[Binary content removed: tar archive `var/home/core/zuul-output/` containing `logs/kubelet.log.gz` (gzip-compressed kubelet log, not recoverable as text).]
b)}$V(<|8uZ'fZ%ka1$sѸ:Ԉ.LԞ D0(5idā Q4r[.ָcQF~̺Bۚv=rB2BIjUYz*պ pQM*ZD)k_hBH DsYv&, L2VhQMVF;;t6kVj|vou5߰a)9EճC>qQ%B~rESi±ՎINEv׃bU.k:es]9H֑ܿE8Vz'e8Z?<= Y4Go7%}r,t.85 /[jȀ9=0"ALꜼ" &eF2 mDzM4VFn}<3\M*xzOy/_z5v)W6})9aZܴ9HPKH!_C 5A)B)'Q QrT@ dZϚM-@ʾǨr5bw{+-wP?#XuzŬp˸Կ[d@f e"W=3bm+DIwS Io+;YnԃDjjsm 6I6$mqU]xr\-[P?ND8,؍#)i5PbwqH@y 6v>d˒xRK%*=vb=)2]A?xY ?=ŽCu:nw~:n;]y{M7p~qϙYvo{~)s{Wbe&auhO'ӎ/ڤ.Ŷ3*m%.tr@_E_kmrh;ijP@P0|;p26o{jM͜BT;H`X:K+J.Cq4vg -՞ٔv™^lFm_F{r<L5oD~ĕiJ9N/ I0u9tۼnKSfT8TLaQdJZ$8kUn%!϶w ;#6}[[mg{̦@xnWFLzl0#“>\ˆ %UQB 6$NN*|~S4ll8mKը1TS 0Bl$1դ}G NDѧgߋ?3"m(k_v<80˗alSYDՕGe/:fjAV!htyA[j1h0*g Z:]ږ? ʛ-? --q0b`&p.OgM~F^X\]'kǿ{2Wm:ED>?QW7dCn6έrdZlWQ&ݜ֩LH:O?D !{go<ު< {wV$}z+Lq3R=Y-ftPT]K[1#2㇯pg5?p_soya|r`97e+u?LcMC j$ȎT/Ԓls!i':Ru>;\c}صݑvŔ.>}>?ny?>-iIRQ7W"вUɒ)>IYb $+Cm޴:fXyF9Yr(~`O:7gEݫD{UscgJTV#ΩT"Ijj!ȵs2JK ":|߂Dp}FPKk|#U. RFb|ɰ({B&Օjlڸ\<@@'4 ͋B"X萍jUS4n/QDg`Eʥ2+tjV+ RAvTT5(x"vDJ2*@Hs.2"Hag`bb{\G QGP3@8b`eW*eABPy&-Ki÷$ G\B֡Ѡ#. 2ƺx6@d qj V VY4fa)-gwh# / ٻZ8ZV Ȝx.b ~ zB܂JUjFJjV,M l0`' /=UA\HaG,WXM2XDO5 !HVw\օՀB\?,#W c2om@1#=ؽ}15wTY|0• X(Y\O` j^XG9 ȱ!3Qld %h*ԙR@Hqps$eYaj` jGRP6ZJEr!]T CQ@ (buR_22r 9T~9sr4RYha8PdF eɸ;o iDNZUhV04d361˥tkia+f#uƘcJ:;i "&D]Y`;Z`ކ0mzר+{KM mZZ$ >^xuu@Kq@y(}l:"KJ@J ФtVp1'8$;煵䋇 a1A-9J# IdDN+k^C6\+ g8/,{8=%$D쓇IA'tK+ 8v V"Y ԏoT?`jשׁl<̶M/0)yaM0M޴^YI*YВt&z6F4`=RMb( 7DR^@1aZfx  BRCX1zhVO 3IRc@Ak^ /BAvBC1MH#[͎V #  gAe#Kv0qkE10A`$JT/f'12A~)j880"vpZ!, CaXY7c9agUSH!oX("Zp aV PBc~4nl*2ӟ>wg՛B/lyؿJ{/+UW7_O^$ Vhfqy<[-1;\ht`Kvnܺw~Ӌ>xVNU$)g{Z_>% ]xkŭÓ:\B񏩳Cwm_2/x*y tKU?-D5QK'U3W_vaU)P$^XjjsG5٢Q3S)G)C Dd2#e+ QcV#.%e+KuK( f% i7d潆L+Mxn1jM͎]+tLҞDGz1h_$ϭ\9R/q' ?>'h6A]>, ֖S z?ԍBi3罥B$2Lye,wVz=S;Ic<lH{`=xcNJX"\'MJ*Վ%i,i [H9g Vd^sE)n50Mh OtlCpYSLo41YkbBˁiklNʐ;*ǂ'{",̴n-KTmrS,d78LߜP~O]d1< hqV&W:(ᕈ٨3{̄MªR&AYgA\ .nK;ڜkC=6LUJWpڨ7 DX>]*%߈1HĻI;ս擞>}v@(IJS`t$}0"(D &i*B;>f{4iON=AIX}-['6Q[q?Ku:?iLV:P`mQ^\eM\+Ziam%˕W &Qڑmʈ%ˋjjiv9U)1AΜTcAgER&%uF2 F'()[iXAe%JrL"iqFLhKіF[gŬiaFFW-NKb6L}$31!$KJFGu4&+!rbAn" +v9{hC`/)y9 "F8@m$%T8)h{PRDj hgހV#S$˖貌䵲B)X7{飑M&0{")!͔Zފ*uYH Bs>k%Fg=#clҁ'bzuּL 
yM=hgnCR80'y~(0+MY8$,{{ـ3C)O׻99֊>ͼel3torrU2Hf*'u r?`Uxcؚew*tQm@" %t\Sf2qZ9i'!V_qoh4ώÆs>s;]0~.b#4UZTWlXf9 |Ɯ1^q`ob;_`)k*l֙I^j}~eo<makXQx[(pR 6D6r;i9њ7TS :!<' @4>$|}Bo>Nޝ w1xi#u&MiʚMgNWR?rZt FpiTV .kn΁ݼ'A2lPx?f?/rHl||aq+Uvϗm{z'2hF/n> ^5EBkb q;m@YMhL!Q )AqGg^K)ήR,: !eH 1z(hkɩ *)o\BPƹIB MbP IQ9)OCD < {ߒ0i2ﰉKyqK V(|n|TD&&ST1ث!.,.6U 8C3 J15E'h*ךY +>3^ϛt7XF8<c 8|g \XU/ g[F6nJIu"+Fb5 <`Ѱ)@c*))K]qjZTl5jDF4LH$D0.ˠUQh渥DY-/"XшNLH$ vj/`K׭cnuz=ǖڧ,t;-[RMvך{-^E̿7'jF9)PFFC4R9Xh#K3J1- UĖn|Yzr<iSwW okާuRWTD@~OgxR"7*K r$gs 6ZQd Ֆ猫}rQH-$a(|BcS= pIiB)&۠DҢv(E‹h18o3 Rn0*QZT;[tsrL;8dmk8o=lhx9[mѻ!6껗}ZW}s|sØRj؇s-;Ӱ!Ұ/S͹7C*Qjþذ퐺BWc箮2RuE]ej B* >£HU>JDgU>XWU>*ɹ|T>b4~:jr}:NЧ:h2;^l?3xc7diF!O6O=J%y1C ִ;5yWkVsWJMrU *þ\MB{ϞrƵW+68Û2r _WK{N_?WsgP0c %cĩ<|Hv;` 3j:+IWt׫iҬAUM5)SK.; .2tE]ej>wuTZ_|Due`:`ɺB$+:gjٿA*؞]_\]D00r>:Zy"g0*W\ֳPWuŪzg*6N3S<;?~ >Zϟ~$QN# do$ISoO@F-* ަ '/)jʽWIM_>oLuE?COvjhQ! kd,BV YW؁+TR\,BV YX! +da,BVYYjWSW9V YX! +da,BV YX! +da7D_{D:+܎@p۹L" 5`l ag#{n-#-?dUsjnܺlWS힘`ܱTͶZZ''zPk6?w%ޑ$v DǙm$%,D#B5^cleQs^e m#"Sh$#Y>A]d" MLU REU!c98 gRXfAQ1xΒj=+"9_N'{ ^~Cjx{3v5R9? =~l𗷎MaO9# tOSI+'R$n?Li!T6ʌUxz{;ݴ!z_#]A!hsȊ}ެos2FK!4$4L.U]A#ruh$JA*D#F$;Z8fYxs7͛ݤTq`( a Ӛ7`kuIOܬa^QoQoQoёiԥ'neLA dAaTI'39FOS6  빱1t(JʘT&lroS;R8;^k`VZ+ fG_rCηѡ,RsRri9o;qpJ}Y?n_tnMNu:UmXrH\&T~KI>L&*t'fx3B7ppyFY!;./53.* SZFT *F 0b"E4eF1.:t"vC[DոuEp E'I+э.MQ2)YJ~y)aCz;;rS~zQL+Qb9E@ʂrJ6D@҉"M* #J!$7 UOkB2N AR-lB%UZ3#gfnD㌗BSX OH,=+g''~Ŷ~g; n&wG9jMLV/ADK&[ D*I`/e_h!60ceɥ *GI L#xKR)EnFð\./ڜ֮FumH$iA=d )yTkB$^CDEXģS- 4YuyY1O_?^iaHF2"pjT,e:#J\[E3D~˶NꫤUI;{I봄A( buU]PD%C*H`,(qo3y5^W$BʪFz+/P0n;fv99@ 6gg޲r:S2ֆ x•.N=QDgHJZh'*R^HL(Em18(\8Nv(Cd(]P`Hb-3 hlRH) nVOa[bL調(!4^(I-|o5#HHu<%6aNxѢOKxD :D<ѓ*He%A[h{dRU H1hX3ǻؐ ?mCTȼ:>sw7e`A_^JSƐ dM\eqˌϗzvI>Qin٣9UIbɓ?ގ.`(, k7dm~Xۚ( ܎˼~ (k#]7 C#[ ZÒ? 
ULR|8.t0f7^69G%hI64W$Qt4>\0fm3 'G;NΗt*'O: &3׻t7?&}?y#Ho`ڍ Au'0'C Vm M14!mκZumNaܧC4[=7%Q!Sp)~4* :3L?=h:FAOiZ+I8!(r1s;B o6d _A6vIhރL,E5珞|bO,| h=<*yR$=dҎ'p> Zɏ=8@ ҎRom]1P W]%p)9v7sݝ]%(KdW.82p^_e)h &+`jSJH3osA#Sf ayΠF:A/@z\+7JY2Ln˵(0RUBYr֒ik Cm ?~zvWl`jvn/6_%^uνzӥw?[N^yWRbbS4a cY S^]Y\0s,nfb3R+.{sw &uD9W>)Yd0 Ao?{M\mh .1ž|*Wl~M]nH/DZkA}3/7R uԛǹ3gn4^v0Q"p}xZzl<ڝCLJՓYOm*fGi+2r2*xXR: pcFI“HiөY(gACۊT%e˺3p{ٝ[%*"zQy[έsë>zX버q>Ʒaap}.ZuCz,u5-.G6VtOSd^gVA@4ދ.[ ׻}rV3δN-q&.+ogR0D{L|Lۯb䊛2uԜJ0<#,@zȓ9p19r?JP%a EBIN5\L>:|]n}&33IX׈>Hp7D2[[BoAƗE%gx;<e a Tf`c $M#P͓yɚ }LU Sa?oL|z77?YE_)+ *Ÿ c|K"(Pv(hLw-dM;-&dܛZyD`J|2\WSZ]vKL tb\񩰫-Gή5ձî4'Y+Xt2*KOOURɎ]}5슮  9e#i߂&߽) mv|'7~Ì*÷)W"R \J3un$v@X1BV W=rZxxbHzΤ$єjBՂrGy4 GmJP"Z2"jkٕJzv%=]IϮgWة+ٕJzv%=r]IϮgWҳ+ZWҳ+y$t$90lD'AKk3hفnLle-2(N'A; ~_iT;!2J.qlJc1F1sɭ`)$ŤXKZٷYZ_nM3k,ǂ8 )BFkdR6}tTB2 IfBc|Ǹ1T)b+ ,Gj`g)Oc[,} *۲5pV 阪l.͖CҬsn庼(ulg۫x j{cgȘS~i-hx7k|-#oABNU a1PW*`2G|UATSl%*"x 21I k\]KIfO~L/Uh|峁6+ǔ@P`.@ʩyشҥ9A8"ٷtF,B0#BXYL^jʈhA #`+ЀG!eLDQdnc]|o;J~q}=X6(O-?ظ;|6k K׻+odG[J_5xYG7%`"{&CD"UKISNX,-,xGI4 Gڠ$;'x|qo7v=]dRw{+I Z,&C3D4X2]|wy|(:<,2PfGa0,B3L%.?L (jC$Q DjjsϮȖm<.ti /Sw\ܜ t \0b?Ē/8\b6-hg,LXxg*Q?^Q%NǑ+{fbEoS*[,3{)eܰvEUV,uEgƯGU)**.x/Kjq]U&mh'UTr3i9P*-NG-9NZŖ쁓`uTiR/a(R0E thyy@1^``͇͎Qlo ̴FtTi!P'c7`_[/oQm:z^uЛԳMjR.0z=/"nSalkTtbw؜oI:q|t#bm޽FCwS:l\(h"ڢ3f4j|HbaJ7^TGZr30B}_ry0I_A6vIDރLTβ۰%w瓦4mdo> +ģ=O۳gӺm-5. gwmI_^p/gg` X;/k S"RvW=CR!))kf]z?I|q̸XKbTDSΦm9rHcFUC$&C9p5( ت $*!%Frq;{K).RQoY(v97X \"xslheFE^\9 98A 9D %L.e 빱1$˱& O5Zks TTYJDBSҍ-R,f wI<=zB 6qpN"8C"K˺<>zDLNwfM|@- BaLlRw5!!wF7nf8ξsa^>lȃr+E'$ U LbL.':S+ɾ'Tj'[S58-Yy9~`FN ~I1OYu.UC^[gUԊSg{ݯ? "H2 / `\0A%1gEH;K;Wr3>l3֭M=ߪi},&QTUZ䤏ӲP8X8i)D5ZXk$:#PcyZA@Pl^CErY#TcFxLr=)Zc8_"S%Ol:YwY__Umq`??\q~ءS4&zEo{"C5!>iÏ!QY:9c֊:#0ps2r) Ru4KeP"iQShc"a'ijb (w[ J)apjTrg1r6[l'iaUsTQMczW3[H[p I>&i-Sܴ6.yiӦ:tm[D$Mޖmm'H~eiyutL /fWM-_?\"1ټ'Wu7>loQجYN5 tkkюFbi·?tӒݡf*>KrޢC/inn49#`}94ȤC=fA Țik:V?ĵ^_zkGa#Ӓp2 @b2qVSDpI $!<$#۟TEnWvig>3_=Q7^y"ч a66q8?b? 
H?}؜-Jrbet >aRZ6i0,סH?DG0[mu}[J}wx%3wfC kq^:fs= pld 62\EErQEiȊKK'猡"XUQ54Q#(-Gp7qFСѾZ,_4n:7܇lC=F\طܱJO/x+S"q)(ST[T,@!H$h ҤRBrdR8e=k 8l :%*I% A42#g=2R b)[O/%#}{sC"Rd#v(QI:\%EԭO"$0LNQywqaK%8 A1TBuhF&J>R)EzvQXR.6'eQ{^ڽ۩@GNj{R0x",xIy'{œ9DkVVOB*PCFɐjўGX"J% ȢxD lxbFW?:}PD#\b xT ɅbAGO "ã@%DIꢈh4%D*@8 $( X8:GKK 5i[E79qS7Xc^v0VyLrO*Zh?3J8of7C{%Qu2.ԠmT\yCncd󤑽v'WzD;:v܇:|Hl Wi""0(ItD[OiI>pGEQHL(Em18(8N9Qv(Cd(]P`H(~#gC:ߣEqkmJ]if7 Vf"ph^hr*ZFhqq2r%d 1%iArC}:fQŏ8*iP$Η:^rnÜ9nrm?.j3<2GOzF*H9$oRQJB^<B1cP+ g XpaBlB?&*glkurn}1N_ {S.&4S OpO>ѓŧ&'&vyIz|c5gF i0jh.Z8-E2۾&gI SnAJrN?IMocg{Us‹eb8N3`ڹznDhѓ>gI+#{׮ JZo$09./|nrq2lcZ#E+&~oە7w?xwͫwx2xsq30>WT$AoM?"|/~G׺T߼ka+9لoү,9~dfFŖ0o_.|> /=Į~OO>p5Q#d+(Ta$4T톪ߦ ^,4|3YH|@%C~M|w߭Jh:t[7$Y"s`(pb%l6M1JqLnrW0Ǽag0-XLPRd\p DTTHv::Eu&qzy7&=(YMmsyMg'ʝLz_`Yӥ'7K2z"LN2ԜsW566:qڄ2}mod;&to;S)8u:P:k9 1PKS Θ <;%-TTdv dnC '04b(nTMA~!F9InP?{{z}q4P1@C@.DnR "{ZfA]Q1xΒj}BYo?GLW8ăSG:*>D,5o^qkML#`:ȡt֨}iRR7JWH~#c)2y(p%8Sdہ+j \e8*s(p Դ*? ks0p*2\;\e*꫁+1Z/ة&}5??~w:[]OzisƻFd>ϗNQE/k-!%W[ZG ir^9yi(k=щEFɢђ1ks2F)pP*8"Bxp[MQ9؍)<ϙ;*dr4v ݆pMԁrK <^U-‚׮jAPgG2q@Z(ekc$:RUK޶ώW; }Fh9ilPl4_"2l=[N>ns۳5xqPmJ:6Z^+b2=UUt`ߪ0(^T|cP4ٔUb3+JDA-O)=' sFAٻFr$W6؇y桷wz4 d%%waH6eUiʶ2MF2//XrȲ8+k*p$A vROS Q"ZSYjl?kx)wNDHUT`UN%s{c՜}8`<IZ'ۃl99%M)[!:a\ ɇM&{ɅR"(BGRdVXN賊uw{Jj:1jEEd.! 
:MOoy5˓drDV+XR!n%9HAP0btY/gTLi2^X]Ałȷy{Ľ!?C F< (tF7:d{X'NOlcSoocRtw`yOx0g_HJjN%V:nyLjX=h٨%V+3E3M78L7Kñ_~scIa&[wcu-=Y 9)Oo ؚO OV0vR})3ORs( <edw{`{тe+/#(\+}QD9'&q sM B !D0ǿBۻ.>@-j>ҡO0Հ>R&S"Pc2xǓ .BLӹy2Zpk%F 1^z4/ա˴V(V>hRV8zƃWȯ&_ RM_|aWOZw=gsoX]Q5:VBbͣu 3BO-#a[aޑTZP/ )#$B\SwY9hy`ZuqI%CO=;1~}'TCW*#"^|*b(lt&RBfx+r0Y2Bx褕(jLR*YV*V, Y(-m1 F&s@f5q2B7,)%lYD7cL¡ob٧8帢Wi=GG11РMC1!tL[RpB3v(!4Y$рdB^%'~\Jx3 qC#F8w >g@F_G'[[F1o\J{Hu$H:xI HE}(dܨ F 6^ d7M(W%`E3Ig.DG?9BUA0̊ =gƣZ)B^}* z CP}#8Ӝook=%cK#B.D~˨IfwB9$8VU{Ԍ1w8^'-3[eo;'~ 17gy$*nM}nDߚFNo2Ao)L){iI}%99 9aI, f,Z( ,\jW0(uLJ>%2ub.ч8@89I='=+j9m)מn۽:s Na2?W6]S69CGK/wfȋevKӋUx'70wV'BnmwUkn|BۏI_A0eQnwWdyٍxs4lf7 ﳖntly8-ngYP"Vl׸R+H̩׊%)}V`}O?j||_":K_UeoTcYYdD8_/~ 2]zy%1xnyOٺ1wZhC?fAۏ<&>XuҎ++34tċq98ia3^$!y,t"((^0nDTu+% oF0פaRJYSL^ uP1,VϨ&vMKZBX)U'%'BW}V>mUC! #EAzsI`RxC9%Smifjkr9Vᴵ1K.$h}6g&E͵+TmXMq3J9-63^k ]e[z[x"=]N`|_qoz?E˃?h8};#B`J !Z*0%V C"Y@fQdbe( AHQ̦5 M&wYNgLyZjb89b͎WZ5^c&rŔs>dPs\"5:kE]>+m<8LȐy(![CMEMT>!fS}F8a/ Nx:>}Z+[D["n ҤW` ]_pBbpZ*#h-izRxV&Vrƴ]ֆh@bgB h GɁI 0X+[jl'UI%'WwxClF~MS}fq2{'VrEImɞo͐h4p.7O43ppwg28; ޚ6ޓm@YZv<:02VG髒mIC&Cg#3XF{#g.c~&.UURY>%. `B!AJtLB9C}j't~uj-_H2Ć֖O`vQ,.v.6+v̥okyԢw.!r1D!HrhE?2cg{-M(w:Hg9.ԣbP[ HĐ˄W OwQ'MYRd&( Gz*8):R@RmJ*x*Q2W7|%MQT◟4ucO#ҝ_oH{Gm^k>}:]ߜHe${%e=Ý.?{4AnЌcGMud0~yQiq`~{ 1үmo+?~].>^F y4tc$'fm4V;J?c]=I =]wu#Q>Y~!i%C2)N7&EpHMneQB|6ZVQ7 ~I=Xe#fv#ADS5,;qOݺ_v^y-Q$PY9g3uYvǜ5KaׂC6,1S\q8蓦%&)8* 1E7'Yd)PXI:&M*:<<&ĖLͬf[Nll|-aGye"ӽ. 
M𪖿t%Ia9/y4Rӝ 8i^SWs5ߦQwk"ߞl:R\1;?كW?5ԛP$,ӶeLTjdku+n*}*;UO"0" Cߡ΂z6ǜ ː#JQ^,*D&%) 1%0Y&P8]2F e@/<2"7+)JT5s6/H&N'__eeGjL^ =C/>~7T;N;js?KӨ*Yܧ"-[g9Uw`o/wlx~oEM6$60m n;7Go4(,/e:?/ԧ QM2Ct< ]AtE0BYXȥfۨ*1jL I+QBNePc2Ӌ?]]U+(ωC s\#}=EF!_mĨ9}z+iw3P\s8ٙ?1-'<< 6s$cpM!Q) mO/X Q$x CE]Vxz \+uWcS+"m̶WLRR+ԗ 9̰zt s d \Jwd7(W[ړגj`;˖n%d)kTQs_XU$4JSU$A€B(-%8P&$Tx[ML#sݪ3[cw9vXF3`ď?l7(m>ߗ:5hskgIT R^Zh>jFcq_O[v-(g(Π]t\!b8yoS8@A(,fύޗTYm jly d2U=[>vqP1Ţr긢Elj,e.?ǂbsLSb찆7, r&޼+\!swͧw3¾ ~?!/_MNZ&~?n _k̻Yy|_i&ɲ-#MsCw샥7o!ۛ.tLfŐ}t!ލV],^-^ki6Lnti:a@z;z]z:ZtvFX~Z-3lo F}hW#$QC2~h}\MM- t8_\j%/gtd; w2+9eN|D˖=aAENAC h)6֌(RBqB(WBff"қB,w9\}d. r{'`+yB k=]W}% ]㻮'ucZf9UiK_]^?duG/E뼘mхmRˎZ.C77w>+V㡬Uoyw3?f~\վƽ膎{i[zT_6}lG汻KsӖ Gnm)HKRI_kM5&Z=&oM5&_kM5&_kM5&Z1jXakakM1=0-͸M&_kM5&^@|cn-iM5&^ qA@O׏*QW1#)c|] .b^~ZM뛍o2c +=n'lW]dO?ˊ+/aoN+5?- K Ip>Snn 'M6$8EpLQ[ 3BQA`$,$T 730D959ZIV+e/"RIk) lnV3g>)J]ml-lL)t\K&?x**? &F,ɃDŽ9eKJy&%Qf-ĴúrbL C`VylBAO6Ze "79i\GX͜ȸ mUBװpXx؞^|dY$#~' Gl@W(;WӼ~ݬ5_kLYMJj_i&ɲ-#qgxǜw쓥޼]iH }J](0dC"{CwsV]̨to|Z~Zѽ)40s_\?Gw>@;Z0}f~5--}-]-DZFH7Kވ<8>^ZHb˛[~$ oJwCg+W)T%=hpYdgGbkn UJbF&5i$j !e03"@dΊ8)0OdH0 P8ÔL^j*9 ^y 9k*&_NJW^;Z DfNqo ZM4&A`Qmk5#9%Oy!9a\ 7HxɅR"$YxL,ke|D$ɥ+?PW'bcf^"c)3Ddk.lk{t"'*qש i~u|}6rFAsOb@pib+ ({FqKh+yw 4/dm;Lsۺ.bP۽82ݿ)z;>αFUs.d; X)hg[x6R*g, diLr5k+ nYVTٜ9/ YSQn+>l<κ\A5o_H6N~׋V,Fn"Y#98zD8^{jMzE ^!2٣u ma@!d ,_ 4r mb:2=2WW3+cw9񽟔N?a}Ee ψ-ijb[f~zs9W01'P-'^$s,EC ʐ7D5C-{bNN?/"<    H. 
wJ%A 9$1C֌$ ]F8g {ʿ\<Ez[IΈO]HSwcX͠"" _pAHR%g*&섖cTڤcv(\N iSx&Ԡg%4yiȓPEGHH'{`$L+K.*4Hw-Sˇ]'BkruE48Q.>d0AN$Zc0avwN.ffO󯃫= {6i(^< 6 e;6q;Xrfab khI8/xةKoёbD $F-^p!se] "HoPv5楢e> Qhsot@OgmS׬Waikf%ru ưY+ :nyH^X9Sbb@MZzU@Y_]J-jcc[,s"uyc  W(]?i繎tKwڝl6e=hKCΖ<\1}k EG\(6PR4]m;˩ey[z)k>>stYQ(xB$mN2xdz .AX}C^WTI!)~(&*;_$0T*j},+5*'V4-q~DpE[u< I\ݱ}CB%\)IկPU<]?QI7fnMLrCb&&_ep5o4Dw8p*vȅ7v[v(l2ﭶ<)9ףz'1GG&pL7]f_vS5XJy)l8dQ~ғA?.;DQmT2௤lneUv'*Ku(!TY$QdrFA 6/Y K$l:F`oҊ7I\G{>r;>#|:q>Qzd0|;`?4}kR tv7+]{'U"swclrDnl9O{ٯŻ4֦w (nL`R)ie6XR }$.J g29rw; B8߶bArKK>,OmT;8ia%@/<3ڒ)䊊J#`0s:pMF&W_L^ ra;v7;iMgWAS(0H|ys(9;ӓMsVY=]\NmPY}}fYG>Yn~RO{Pͅ׏ޡT4 Yc)&KJyF%Q& #.NQ[c!rE[䂠'EL.S97)j\R]#cg}`5,KD#FGbַJ*!3@^4dkbbɡ8>Ĭ"Crc|;:cmՊvi!88#Ύv<Qw02/׍IC!3ɑRp,h3 K1JVR$.=t@PpE d!xmR%Plq+"Khzz:A"LseL}B ܋^%u55=ǐӡ$A"%>;̉`$_*;MAȎ!EdH LzC֐6٤` {uĘ5*EG4A8pxDi8kUAHh gJ=? qVOeUc7??sT/hyJ.5=m]R!>hMɨ|Zy3 8қzVk .TC \G)#Y־%3wYUy41qrNΌKTɨ|J7K_H}G~X {0,1үR?W~/7[%f#i\q c+خ, Ɠ׳:;R`% -zfD{3C mLIO4e,>n=msx1歭 E'Zm̈eӲÐ6RV>\59i0Ƀ&-࡜5դ6/pxyNg_/ﻏf?$­"حEVpBM/hv4hZZxFBtfӮ->4ڙrʭ HS,n7Xzl8Κ *p=`^mYnqԼ"n/b !V|(1%zU?mM.xZ.x^6?<|߄-Q(PI9"gguYv%;eN ,r\$j|8? 
}`S )!(̊@F$C`9VB`ɦ4ͮ/LW:a4$SFC 'mvP&qk~^@{.M>~iOKN)uD% I`є$,J8EZ$aؾ$wXk|P)j7a,wiJr *$뫠AUH.}VʉSX+xIXB$iKLR)cG^Yf p˒n|yڭ{ۨq;IM7N+Z~|\:!N&ʫ&_.L?99ܞ9MԸ͓j-jt@dʖҐs&mVw{26vfU/Ymq&u>1#ЬGZ%CK6(-Y;\As} D<,}$ -02,-QYF$pZxDe62zw&3j͸>Fðۼޗ%vtA[ۉm[%2)"z]<`P^,UY<9E''8#D(ܨq =0Xv"QeǭI9D9AҌ( }KP0QoqGӒU,WyJLϟwj;?){"]^2@ YhCrBK1Y2*omԘ\}] 遧\c=#Ϸ<)2(UZ8Xt^bI $ yCJΉ>[=t<{?[!183rA!W,s-r9&d̞<{Pvb&O?pm%#3?T!cޥ-6\q(d)^Di2/#nnzPM');q7#hL*IcS$gT>%s*$9L+KM*0#6_qX$?G#n 3%uٔcu=;"bkfJxc r5hD̈́Yݍ\tm<}w]O py5ܘ㬠q"5Zi~"OatO|6O,ӹyyBFfMC 4X?ź-\#E.W/7-dٙff \ -D=[l=,;nI)KN:@["U:dl ɀ Gl't}_|evK}URI5p dLi !d03#WR#y1"RБ)TJrLZxK_*() ^9T&Pu}xq>8cgbel*['E)VS &M<&59h6omcyw ARun[NC!{_r*JKv'ȒV^ nӪf{"Ի<XjL+cr= %XB5YH5D>I^*>C-F.Y4DRgΝhE; KྤWG Q,/ 9dޅ.ӌ`kLx̼D[K,/eOGiR>olIHNs?9Aj$d *& )D0OO!^Z×(=SSK)«>Үj,M(gJT|Ԗ` .BLdӹe-8ƵY#sZ)Mip+(^>A(H >`JI3 1Go {wzwqdžl@Ը%w?VTXoGH鞓ġcٿG%ACru2\J8S a[%bC=z:U?ԋqB wZ!s2xb KFi }#N}_JO,WUPxݭMJ3d5Z9,Ex褕WȹkDR).T)T(.P(X+{E v^c5q6/"nnyC:²p֤)ED]LB~9iˇf֯4Z"/>S)$*_E\^J"-CUT}o͏8'c8YU؞V*<@h\4cMI4 afAkW _ǝj/wǣƒ}U9縣g_Fcl"~.4zTکG{BJ gTgO`<75U|=qFS T6ȶ) $4Y^ ~@7 ŤB}EUA0̊ =gƣZ)J|WP۬b Xuq. Wym8LjdIbͧTОCICk%ZG?4&NjzB bA9@qyӖztrkkRIbyRaz%am$.#wt7߀ήcY轶++ ߽bEP0YUpYbrӾfǼnꗙ}-z %;? d0Oxꗭ>Ox>Ox>OxUU9SR>G}S'< O}S'< O}S'< O}ްL#9gz氞9gC'T9E]9W!'qI^JyVC!/Rn9L9\'i O.FJN7aWU,"-+fV|p۲jc;._F\l{`'`t|g֭r|ͧ?~ 5I~LM73Qg$bl=%Z9tzRd;H#0VMb%HQH. 
è&F) d1F0)c&PN=hʀB+/|f E"{km&T0WZzlا<%-?b$zNΑ;?D=kJpN`Q٬+RW>j~J%m|x3!ͼ"3M>t)K# nsC=>N:-v|ySm<,3+S;j{d;;mxVoҒgAhpt5)wtf# F1Jfc!![ 5DԛOpVHqO84[谌XʟNA!sH1F :#qd$lBe4Wz?Դ!qC!\A J|vRݓ8MAȎ!EdH LzCkHZBubNZFQ VF8GxDi8kUAHOё|$>it >+KTDzQtur14wgTFGe4ǟ[%=*^/?q71 YbG@285p5*d,k_B G]֔)j &.щRrG~2Igat}4<;J#̯-%U])aeWtԍmɇAi<-s]O>84BUzp['ѾkӳՏ$WROG#.r&Vi[>eӒ;s&*&:L#.kv›=H0g8'flWWӖ-rpGi0ՓΞ@YW7@6yh(Y|<^ p4U.׮gUuRvFJNJXyq̏|h|Mt߳Ɔ/NIDo}/?~x\oЌ'iB)}z ]tmkuM-ZO=m5/G8C\NIߌKnb')N7&!a A\|&3?՚N8ow"nbBlL~;>h72mkMb->zJ}+~_HB(PI9"g$<$결 gN ,r\{a|8?}Ԅ\0f)VP(̊@x# p!D@X$9b_ <"/0FբEZjlodHQ/IEتb%""#{V N+:4LS[ 2=zдWK$bZ89\#Qz B%7} QHkV-',[ݶnɂM&]RoT(E$xE'_DaNWP.8L_}=>IZd9,)v!s{xeHIB^҇e -X):8-i W HDt[;JŃA!'HOx/i`7`lrHWA۔|8/I[?[;hiFx(Usw$zP S> mONh)sH :$E%dWQ0[?c2iGLgW=UOy7͓>r\ey"R!n%G^D!G+1Vp[/us͖{-P|DΤDA5Jf>:IDDBAhQ+V^VJl7^ltrW!TO~a0jU3[*d4 8W`b26UPO/*J~PtYȤR`U2;v|%7z/O~Kjvby=$wi= 6Ce-Km7%q(<0:#'CyEl$ܐMG#ԥV[iiE6VˬnQ6ٷ>½#܍ \ss;#Go( )v˨c0u-@ '=Q+0hV+X2Jk!ЁJ^v@a|u|uKGC"Q|A&R 3du. Ƴ7Y2HХW6СT )0EeWrΐ5 sP]{Cɨ4LC[#g7:$ ]=]NRo͡|<]S۝ss|} )y05V_yʖ9rxXJ2sWUY(oQkq#g]u{#cv{dn \\o'c, 笲Ë,۪qzvJQ/) h:rKB]Ut}篏 lKEuUXUdURA>R G V*:v,ꊨ5PE>ҥl1.ky4ꪐkB뼺*qԬWWR]RyWngRVϳ_]?p (U6egO6C#^NUq^I6`%B Ă w_2숔4.Q҅Z]W҅JٛQI[#RW`F]r<uEP)m>rj#H]FQW\E]j5뺺*T]}?J :YEvu&2'A̅A$2q&3s-{Z#giCuu=;}zu=Bcn6›|L罵Ð$bF,QyN \V1hF?!wh/$VЮ _瞺v\;2btrkc69#6rcHBö\En ^9LfyS/嵣!!m-vEoYe;FS.m& UHcLcU40Hd9[sh:a9rU%Mv<,st$|65c)@+F2Υv2[ŝ&%LIL7rRw qAnt&s#'\m-r.MBq<6+?cNn;,x'[[?{ĽZ?n:Ku. 7쎩]x'WzNxIk{$\yumza'D `@itE/r;&]nɌ~jȊFV ?}2ݓ7)u{̼2r;?o|۝:ϼϓg34..\e>ۓIIηL'~O6/_H}_~_?ٻFn%WtG,ȇlMe`G;ɑ4-d[%K`[ݬSdթ}{o LS(+MM ః.ۿ?XtKK .-Dͻ?Mk>v1k^ fvmH?z/SF׷Wοݨ.w&EB \c/|&7xy.OZ"@[AGوj~we"=5 i ']~+eB U>r,!8&. VCr9%0!8"^(7vAҡWn0i)8 "-7u"s*D`)l])x# `9{歖nSʦ$M?]8|Wkvrbek,;+,tI\«tˣ[ =[H2M/ъ*L^5RJ߀.lqLtU/toR:)Np(+]l2'9Lpq֑a{cZ^O?} I@n746lќڅl-ĭ[%Uzp'F$a۬rL! 2K%J. 2OGf&F) dC`Rf@)Ujp\AA쀻PG5s/H&G'zcZuSqLPaz2[p~PU}_w]<ÉQ5K)HV{`QaVr W.]al4:s|w.wn],IR 6b7Km)ц7:i= 蹔Gt1 XDـc΋4^g]TmTc՘CG5`Q &I!t:TA:KpĎLVk0G{߶,^[zƛ ̫ do~g==; ߲? 
'*|ϱ(H \fQ1Ws,**yEďNh Z=˟8njpΞϤ|-Vֈ F7׀ζqFh stNcNUtyr ӒOyT4*hr+mb6Yg7^: PةVE_'ү~e<y]:,o4ɰgIH˒1h"((N0nDTLUGMklCR%x)&' <9RIDګa*;j5s;jES@j=-k=֭8 :&_'a4vßи`yПL_8b(h^#&1AgKP }@"fQ6JU) J`{!EMM5D#l4b;6 !e w AsJn Ȭ%Aucq@21CE aML,h (1h"1cV q[bW3g=AMhӉ+mWFD!bOx ҠW` `[!C$QbED&N5VED1mb@r2> $4CHZp‡*6٠}W '7!Oi(ee\.vv'\+_@b`= N4 TZFFǶx=@Xg :>%k'Y[ ӱNYbz8(d4o X a.#xI[kIʖWѻ&yv9]kp&>Dd͔A2qIzcـ]2 ,耳;є"U"Rx(81%DxۉST"r) ^W3gMh_O3~2\Ey3 UhV}EbVqgLFMLyL\k&9$06z3 WQ1$n,ie)|v]6zM% 8?Ksʠ E dL1"C&;He9kUrk:΅}6 '2 St`;O8cr0iE?#Y;vt$vdu=ebY([s ;eb-SW*VZ)[BeœX}eYvJ  L e9IQΏD8oonOj+bvsw?w amszD) (d%Mr +`ky9"q\n{˨5H )KNEc%Z8ɡL#Ӂsa|5s6 {H|m'S qP?}tqO{\}_PvqeˌNK=V# 'SZ>xeyBd% ːXY)Z,CJP|wpe+=xnV́ b f)A ZgԳHQȮH #+ ^x~S/(|Hƹ4XI%ItɢQX!XREiDBk=%P~atf4!<뉴ׇGgL(tQl> Տ܀v>A.&=a|l3?}v:l8YsNoո=w`~M ԰x@5\'cf;?cɈ̆ Δ/ O\p/؞c.\#ňJsUku|O9'}ҡA eWk^P`s;}6Ux5mp*lP9~IAid}Ռ"CB U2Cl&iTtmY0]ųi5)VJ`54P5\.C:#SrͶ60aP}F?rܐ,0~Oth>g7Қ&A6of-5XpseA_Pb]Gi?LneC  /ъXvdŻr9p~nWy;:C5WgP8˩~O?Q9*Ǜ!7mB›; Ykч\Z) 4 2z%؍wCk06FJpV7ܔm.cf7+fͮrۗlQ-0ai)'B}t\@LS[}w[>.[gpE_荹|=OHn^nّ?-V#]X09+W ގ {[araddە=UgY %5boLKXis=@RV dHN6^ h΄%V#[].UI6$} %HH~ YÝ!Tn@8.R%/vE9[pZ-swǃ )Ԧ(`Ϛ$X=0+E抆ds5M?%]]%m\RxꜷKiMj\!fz`,]|sF }2v5-gҸ=4n`0T+D5(F w^4,^vQ.Q9TTc՘E5j&gz_2; f[y$& =yF,nmےG;q~./XrZ /:dǯŪ\DB`( ̎l)hd3D)JśW߶֎$J9 fh}A0G$E-|ZR1Z/BĨ8ۉ]:$g :>#y{4Kr$hvEH}^8 F5>W|tl ]Yg9OGI,2,PTJ #y<ż g+}VF diO E3 <$B S Ѩi$LTm~vs"pс9E= f5<$%(McbL oC޾3=בS +1/TmXǖRRu'3j X4R:E͉MP/GuP2/d6[<']уMn-Ե>,m74%+X+yd(т A`XUcR#/)2gKJ%FQRQ z29rڦL!'W a00PZ#c3q#c; ͌}7Bg^7x͎b㺨?G&?8b"6"W sl].tKbfC;$$egv7©< I T¦2 FJlaN8_$Euw#ZhVLCAfcOԞ'k;;` ОBeC(<ȀuVYU yd[۾hcEBffV4lkRdؠtd8! 6 16g;vF"ǾxDM3> L`I"bR%HOb$S|e}:p;8{teIf#Z u"b`a7)zpYY+x58O3/wǐIub W?:K'@ILE<>L/Nϯ-;#r@U|@ 7mٮrAqkEq5ۀ>t;MóutV$ż_G!4l%?{}m^BoޜN/.OW?+m{|rB;].ڿh*'6uc Q9}3n;"kevwWӏޮQt~W?L3ՔkEpO' u׷?=kתkoѵrCR|үik>ҫem%S ~a`B5zmTU-A%\b_͛T,AFd#AUu>cB[nlq`!2'i@LH 5p-Q0"&h$}io~ODRB2lP*(e(F DQqSɦ$O.f9gmO2sq7]y:[L+6R]KT$>:sL%߳FhJ舤_eT֫95 I%`t200d hP'oNQm )2d *G6.W7b`- eNepI."|cL-q lG]w!%\wq̅~J/6J.? 
v)Mll}?xk99?9 &pOIA ĚPT.%pH9yu!Bi|#000e15Std8HJl,5`%6[nS<"&"_ 3y[37pkv׽[#_;2%^/#IAX)f/Rc %CYK&P㏎!#x1g-珆l3c)CWD tt" :HXl:""&:oCi`+>|#)%۳AZ-LS#%]2 r?CƾծmjOHאv݁/}KY@.3gҢ$ :>]IlS >-u/>='kG.AM6҉K 6+}ƚdg,vb'Fd!(H`L39-3OFjN*yM!G@:SMni qZt$S~-q4]KMʋ.GvB쨺UsvTr#Mȫ8*Sg#"[m :4J)3eU>{5km>B.h&ZL[a!;Cg)D[͊²`JkJ@ʇD:USDMRԤ5*+Ti L >Vf8^{8X^O!G93y|)PL%17s~CuL+y/FT<B|}Yp]o9wm e7/"W&M:~YC$","%HCۀ-3]uj:Pݟf#UU1‰ᗜ#~}7~o7V-kìfR?[X'8/$(YG̳f\gOQ9j7_O/E? ޺uky>,_FUpkl먚D4~WÏfᬁً].p#l:ǏU9x}Vs=ǚgTj.yReQzrNzN9`:viv6JP \513XC#ws9rIăCx vm-YqPR|3{,V ¥Ah(;t~'4!6VUmw⑮W߾߆OÎ,NkYsd 'wu/|VÉ5jb~m{݂1zz? is`;W۪ʊC-ydJȔsCwW3SJ;8uX/))Kq5DuVՉ{Nȹx`H0!e^R`QM3-%zŤ|c6}&"ڣjmea)]Fh9Ň^OΦ%֡g҇I_Nᤥe`kaꤏ@Uf8{3LJF/PL ѦPvh(Xj#}\.itܨc8^JؾSGg ~ h4I]k{;ʹ#,_ Y B^>lԃW>N"urc֊:[`6|{sfO^q}yz#X/6lOnіzHv\5gݟݺn~7tcDv+{k 6%l.۷܏*z! ^޹:j$F ̊`?Vⴜxz>mwG>Y4عHetAѥTR x\I/)^(-0)^YJ% LT*j\e_ ų/sd_U֘]7W5\}J2B>yW(0[Huse|_J+konPJEX1W_ ΅|_-o^LU(U m^ܮGWCOۯ_WWI{kD55NuߡGdTi+/?8{d썕J/V:Kծ[ilҚ),{cJ틹Қ߲C))L_2ڲ1W(a{s;o`\5_> o=I`_izu$)2W JsWynY; .8%^l~TNF9e78\n?_&Fd>^Z&SCJZGg߻-vH H6Ȋf.wu4]=<4"Go6Z ˯|k'X!RArf%!g.Eҹ֬`9 Ψ\jo/RR `KCx,zS5?6e؛4g T3N k .䮅ܵrBZ] Ť'Ƥ? G[0Y`ef &kltod)9-[0_ z,aoMBg)Y qBB|So M!)7ťĥT/Е reGRAT+JRAT+JRA '"Kפ\A\ W*ȕ r\ W*ȕ r\ W*ȕ rܫasH7_y." 
^n6E+hɸPQ˕ (H6q,Ps8lvW^2&mKXzoǣ{si #ձ-O2E`Q©' RI+ r|$FE[ D:υfB)2Fi!hǹ2D&eeh y7qb2 Φ6$ƥ:r]^xϗV?zFZhq?rh !%51&HLIZPs>,~9WIs]l|ArfM37z#L),$m@x gD8"ruN(*i63yl&zɺV4c O Dqp yoByl}H / }Z֙HHǙ0eU!;8D"IHNL!1 }R2@n3R(f :sE=V  a) P:͓CȺ:>~uT}'տ?w3{G~o-Ri(J8ןUgvM~t?+5\ P0ZA8"$8~yc KaTI)v<nps08SO~jIԪ䍣R6B!Aqb[q ۹?\: '|Ceq>5tMui#Ѽߦkp Q9u69{1UG]_:qG:uZWZ=o1V?+?4M?8^tӖMq>LӳvlۅMj8|=iQ;÷@HWK:[pK]ͰfDwN>Q̫h8 cUFouɮV*(Ql8b T|~8VTeqHh5jO9=׽<uar/Po.'|2}r'~WLɇ"EFP@{M}5͚h>{=uڕ}vӇ>o~6|3 ߞ?9]\'8ݮϳÕ0Taż.>wBn_Ƅh>Čv?h:y~מw$#Ww |(>$n7JH| jtLDa Ф,^m#!#($T)g Hz݆ͫ.8@\"3u g0-XLSd؜kx#20x qFr|9Pg'חo'S7zӉ6YHUwZۺI3Mvv>bizt9r/N$${DboR 5vV<5vrfrdao!~'^b/e9kdmlHuȰ !*YUeEk5دBeX -ԡvNf}03fŶWJQii-??^냒" hkM!Z^;l-)a -Rn ul Xd }ddT!D"DnХ.2&LU 9U!20L1fR "GZfAQ1xΒj}58zg9īюxqcGcxvTYqU̻p拗NH#C$s0F: A$꟦ TWN"6C_t>T6JbD/eNaN7Mdjxˊb z@+͒oo y%|ch5|3i?dv A Țik:V:k9` y.Q/jKbFG%'neLA d`;RI'd8!t_6/]+Puvюc5h!oxmOj ɽD*"X\q}8UT- ЮvАڵAG6@Z(ekVQ_x?_&0߹5| (PC1'^g(6#:Af+L4X8IC)jBfd7י2N]na8e,Ӑ1KKȍsݱ9ũe*xD@a9J(ABS$Q 5)S)}7qVOxzx薢|+{r=zӓgz0oPGtZ58(U^'Cb9DX* :P<ISCIVZsD S'zP!ǂAV . Tߖ7qV[|lao\[hz­¥ w$ߌ ÃP`ip>|;y!:u4IGhp-."2uANR^:aF˵bt&.%2 0FH~DdII<9ɿC}ܜKsRdJ~Ͻ!-#Sd˖貌@Zsrg:U—d'2!XEފ*a.H>!Y 4Yzv+Nwۺ=X ӁGJ"Eφ;';ր9u ^"R=|۪T͆J v2!:c*D&+BF&-=4R* `yIoSZTZfa=$8mY'}{ȚStaV|"eOGaV5g̾j;}>R;g7>;R0g,q-:sdr:pX0I*SzYaξr$im{g5I֪=s;,Yԡjdi ҮGn+F%LeC b'< ,\|Q)& (3,,7 *;΋&2q.3۱_ܙ: zzS:%/z׷s]\hHpᶶ;eǯ鿾ibC?w<: ֲ&kd %1'6f9hc="te-7˸ ,[({RF^d+d$8&I1Qh4D,ڗˎ3oZ.[ҏ3ûMzrYĭVc]^LCӢ}[57:|?DkǤ\J!rDͼEJځ]6?һ{x&wV[Cdm0J`򁤦hȫ̃g8zyuY{z##[}isw#dk1;Q*)ډ,? UoYIօеhM. =VNQב? 
]X %;bEOEl7N*DiY(g[|;D' aYC.msBLf/O(Ρ4%:z!;̥_v6o;sx=/:>l9^ƅT7AMzPjZb]LR>x{;꒓VAx'he\ }Sy4MJCEfb HiN4TKz|`Jewl O'Wp]Th'i:.(֢ḃMQ{;eI^_'CԜp;^UG*ɒ8iivF7Ï0 h󼠏vYCzHH;MZvq;TziǤi+yS1j'n`ޓhKO)@ssjM+Byw]sX}ng_ȜM k4jBaztẋI f:ؘ긵*5r[L;atMx͗f.!Fa-bmb)`.Jt!3 f@[lyF~5/O蓍>`@6h22>p8pZc$!=;ʛ]NOBsڌIa,ÌZ)'@O:Q}B^`()?M]p9=ZMr Ls ?>[ @l36.o62ݗPm-5R * ]deh m4c'i8Ͱ#cNw9kp/9l˙P%EA A9MoEHM GvIxms1n[ZX`ZhU@#Eh-,ǔ!:pzF)y4ҥȡýpaFpuSw;nLu;,=iIW0Z|h⳾[;5gHWX%OJ -E &Pb݋ވNm)$MͥMȷ5 ol@3^ MhFm!G"vl+!!S,EpKh'WBLx \Je,`, "KJjĠ`*䐳I7_ZA6u.XX_0֒ސ yNFg),#V>k@vU/ntJ6f l o `y+ʪY Bj(Fg}oc'dcJ+; Kiv)mV雲&7PY@n@et@y:p1) >cYPIyy; d該Θ '* $ޞ Y}s4}t2=unڌֆ) s>헸F{~U#_-,W`yIoSZyr&l6JI4z0,Ip۞ sGc9d 4y6*f))&w)#@:%NkBr¯L^C`Mk7OsgMZݣ'ˮEnwt$H ΨRLMdQùTJmQzdR/Rj+`n}cM^0*kUvﰵ*yfkx=Uh^ZS@xukdi NQ(+F%SYِA"rp yH. : 9r/R(L@QfLY)XnUnwȓ3"LfvwΆ/Hޅ6_Mg^ )@.8ү-8?C?w<: ֲ&kd %Z1'&sV Pr{`@8zo?}W>q!5Q͵6~PӢ^d %dn'/u9S!y{;{ >%'׭N. 6F}sY*̮{iw2¾+ܾ$˻XF+) {n]9-na^c=oܨ٫h'$nO TWlOo筛w77oxx>Fܯ]й Mn̝Him[9Y3M,62;?nnlw_SltyWdf_f}l]RsMa<XkfFQd}5599ܓ.tqVK R*??{WF_w@|) 02;~` HBdcdWd%٦m9IludOO=z&hyn^Seewbbm%2:_t.;lxԍoi jc˙[ypy|r}nv]!$P=2ƫ {!^et~X }} *dn}yIKJu 2d @-I+vY0:v "OyZ\?UVXw4vg_3Tąop {zn9J.:<(^L|6I+ǻ>j::Ɩnj\s?Lrs;Ԁ>i0لpWj,QFxEt:&,ǸV9kd)b7w>n_OFY=B ZFYY4MG ,W&i[G߽64]M{X38ޕ%!a/nx}.|0H,ȮvG ~R_t`0:r";QAM=VԓF@}N8/PǗB\swZ9hy`ZuqI% Z=>p&3upLfJU%2ZUPMJ3d5Z9,!F$Mx*h!ޣfpE~=ioI \U ƖtrkKrĠ.w5OG M7o)L)6"pzDke0oK("V! 8eLJ>%2ub.ч8@⢰:9{2;NzGξ BWv g@Wg9~KUzd%ue7Rw vzk=7O%YhevGUxg狶0NN[:9oPEisqIGXMwwENhzgιlvil7.%Vn!w \6e:̧6AvW_n_RdaPcPjCY܆P%N?:;e|(*2od{OQht< MA4,ӿA# ?6wí{쓟SjI!:"cdZY_LJTUSݶB_gH[|ɓwp ̡ۙ{c+Ep>}I-y[n7mg.F5plt?6G@VȃK˕+#-Ke:d9Εe"ĽGμ: ZFBĜqح[cDYρrO =1I +" cB-ynR AޝHD0{Q'ӁkҀ0l dO1yR#֒34YՆ[Q|RϷ淚<oz;{}S.nUǹ F,I1T[/+921a59+<ژ%dzZ삈>E3xZzB]3ږpv[zX-&-t-|RdeA=<=_<{vw74~2]MWn3"9M+ H(R(cO C2XPmR% :C}왭dOP`f*#cCKk[3./w..ԓqkѢw>.79Ί M%ts.FYY{qKr3=A[JJl#HBo:}r^ oaǛ(c C\6l(tpnH#'EHZ'>Hw{>E{ +idrXj^[YʮNA!sH1F :#qd$lBe4bW)RK>ĝRhJ|v^ ;MAȎ!EH LEH :** sIkc֨< ies#JÑ^ Gz$)[ ?FߍwRjRiyL5? 
^:PxnѩFzh*wM7Ax(datˑ,ya#.kJ`ڣ%svtb\⑟Y|9/kV[͒.p)Qeȯ*򥮇мc|,2|21őjtǟҗѹoH{Gھ_lB}w8;?H{%e)p{E]ڿh* cGH_:qZ2k.>^èl̇эgZ=n/!F s{wH`iONzn7DlE[zr'i=0SW+ޞ@Y_7@6鬵O4e*>N&z~lpA C3#ֽCZHXmvy򨘍c~Gni|Kdⷮ<)_}ݏ?T}?׻V ,"yCO?4O[u--ѵ5z9k>rOwGtq~[gv%U )?qx%sj)N׫&yQ\}Mj~֑u#\6юoGE< CوH?>h72ض=d?zJI$<w%*)B9 @³L.m!DY.&Izh5y?ٟ}`S )!(&I $rTXIsDæNMyZ\Z2L.fr'P,3lw `=6Om<5'Q7tbsqqG[ByX^2)c\::]Ɔ~Cϴ^9"=TF$u%V3ȇ ,(~s 28uq㵌 &1ǪФR* d۔.} {hM2GwV:6^8;Al4'+¯) 2 :ռ+M2YDFܤ;@ o$wq#`*jf(y\d(o_19xHIIJ"34aٮr֣x1S6OX7c_0K9LHmDiN.}Q`k[Z*8^UR`)$%P\9Ť730r(21 0mj5cr`̒ 2=)`@eb^D" ΤPLFe&vXTfƾVƒ{J 2PY^DgϿb<~:7O?obi^#&1AgKP }"G"fQKUi `{!E1RkFh2v m`XC,}QKO'qٕ?;2 Jc-0U$}maƙwM2WJϲb dW%T%R(\X-P&й))eih p]J\Jd$P\\ϟ~ I`349ѹ}7=߷o~Jz(Yz#cM[Wtz9G?}ٙ(U=go~lXJ$wTy'Ĉg$iAoTwyga1[wU%7h^v.w eV;NHo=;NKQg?!}oDW"UQŌBZڡ@ĬΘ@4LfO0'VJrJȃh'3%J dbIY8)Kݖ 3i=::$A8fJ9/<3}| \"EU$USʽjYyejtJ:fAkOd6 xjB+Yw QTc'c,tې:ϼ3hAmAK%{X$PfӛҪ臥N~#d+[ %K0џ~\1r<.qtOq')v +EjPoW'mc'Ḛ`~`.OwVÍOVIv}`wˈΦ@m6xxnt }_ކNEQ8rgbstTH.Z|h"/uPl2w&I61mO<jRlBfsdIqu VپyQi-Hg՛nט#+BJ6DWQ9F[Ƕ}FgTVΦSWs)T =zW+z2IT~໱ɤ;, g9)J7{|^6Ӽ"ɖLwH'/D$+ѐhX9ؐ 15<\p 8/zne8/ESrq?WN?~szֲ}^/g8MyDIRpDt[bp iC M(+ 999E+R4P"04VD=z* xT!'HObbC-,aG,3^ ʐ~WUu<,=Zo;ͬ$疒~lD=(aS> mOVh)sHHJfB"*Z]F?ʻdcxUlg4Y mUm@8`N$a e(?O tܡ`fyX]ybLs-k};bkEk1;Aiۉ4?5JFhCGݴ֟d(}ŀ?j "PT۳R #^xuMЅn> 7Ni\''*tg>'mvNwd R#:(߇81$ 6O,j!x)&Kʕh&z& 埧S+7eЉ` r'$lo]Hبսo(E2LK%Gمtg*e?/}cr<7j%v [,}.a&__Yjv&7;Kyʋ98_{/Ď<ʳ.3%^\Otvfq ȀaS3_._E9.W0ynO.kEŋeۋOZcZ%Q`K{ 5⛵(I <*fa~I 2NS `|Qڇ8b peZa}F^kƀ^b}uLJ>yG}: g]NؼZA)}Z :2ZW%ܸqb6ovi#9sP>}Q!P 3{IMriRYd,?jU{ 5IM.`ziNig1:o|zaxnԲW\T+v'׋e*b_X:6 j'{繹um2{ӻ p;xonޞJjscϢ빪5f ?COE\x+\+=4^1Pc-$S6ӝsV-?Z5ﰄ8v񃾬Q+'@x8'ߠB r!7 k P%_L08aAG`tڥs, 8E28˓6BLӹ0Ad,ZIk_\vx=>9Ttt\vFӲ 4-農D~5!0^roO_/+k yj%z ^]넞绲#8 f=^Qc)Se 국3/7FPOiDu'~@JmsCkn*8+=-w@(ZI%-eqb2'dNLPYZ],똽D# BnĤTJPI0ɒ Vƪ\eJR Ke|!ΐ5 "?aa2Ȭ&L# }JlVv7Dc̻B#ztk1|)Ny_QmWiD_1bjrՖz6gk}vr50@C(cSƑo-8^8.5e, Э Ի#<!cCu1sv޴ W)|V垕@bv;Q#^.$:,v(p*YNoy9*y~g:r'+Nw;Q~2Ͼs"~y2ƥ^2L}ʜQEDy6KEDׂ3N"bҨV{h  WW;!U|I {6"p] d3]ArJ*k{,jE'Ym,&UE>Z˒3^Hv98P[I_rR#Ҹ*9OUqN%.l)R);ʩ^D](RP"mʺpF:Q+BWT x=Ȫ2-F^5C7;ɇYM!I&}< 
sH:)phD}Ac<9E~=io9kA9 (N56x\;OMN!.C(l)7h4_ғRedc l85^? hW$^_;X$mH\Q/tIǁ̚ȥ@?h9Y0џ D?,N^]~k/]pO3HR79x@L eJ!QyB]QE1Ȳ h hUB2Υv1mQ?OIL`# 63H'wLcp6 _Lm>AR"x[HfܫDίNI󷶂2 \:emVgTk,(m̮iz o|ҶÕS ڴ(V]s73Tah4wZ yKw>,% 3@ 9k|M3}1gStN4?u}~ZM) ?& 5uhEKȭ)@-lNޟ-׻*!)~%ڸO4mqG ;%+/4Gr.K9 HcFELbUoiCWVu~Ĩ5,I"3 -D8WC,oc {'W77gA{j+^_G{9@?6{{-<7oͶY*&6MIۈ^s{!hu ToՕ ~~'E`UӲ}fƿ_њvQY_|{:t?t$  hI 1 j%1"xXb<} ך["-s˜mLVZ"'I-rR"'I-rR+W%ZW*JEP"'I-rRTskD岮Nh岮\֕˺rYW.e]]-\*5W#z5WE+h%䢕\VrJ.ZE+|2̩C:n+8gL2\ZR"_+\:sYϯ}=VJC[]VJC[ih+ m4VJC[ih_,-1uUtP_iti[.hB.7O|^iPg_wcٞ'lñl);;K;֊X֓B ȤqZQ<$\p&92#Y %m=rfA{2FJ&sRub`:TV(c8{. M`BسAJtLB9CzqlbXE4K2>vTI]?? \ri}sFEdDΊ "ۘPbԑ-d(='! *ikt\|Icrny3Ȼ,uDAވ0FɌ`0&C:k87ɓ"$-UzԽc(Y9HYK+F_B )(d8fZDg$0.cĘ-W^T/dUڟkiDA FD*G)1ĠH3! YUni!s|NZ "GnpZ9HU#=5HqO(uG,!»ZEVӷ'Ӥ"T J DcOxWˌB*-ݸʼq\9?&?,U.=r`{.8Izd}K ߲=Z8:gGKUj>>gC^{Cuճ*ΌX:.i#cEEI}9NQq̏xo4}۝FIhT0״pz$ߐ9:}ޔo??7oOOӊ -\rv!spbw/v57ZZؠk!|لoүW{W}~[0Œ*z\0>z=:;/۝tj"aаIϦ9qgMQѠbkBxW6b ;?h729mUL_A-xm@w%r-Qhr*DΘrHx6I%4$S5aWAsG6i}8ju~¶g,vWf,z蒹t_U^&Ib_}4F':If ˬRӍ17eg)L=vEU>)H:'>W2T@Y!vV;1\b[=rw 7^6.#tq;+d˴m y)|9 -9[SUO`D|u* S I:r) %,Н(b,5&ey, ŘH VόRdc58:ċўxӊwOMy 2]][odss##5K %8lVrds.0l4s({twdzi[$VcZ nnx7=m7/oS ,x"=h^JݣPuQt1nB| 5hda[QSQy.Tc՘gC56 2KSJ!2Y,wdA-c2f_]L_oeV hZH>x3&<߀W61!9A7H_2@iҰ.õPi QJSB>aIp;q*~ВMg|$&tnstv7]К{m+X:iӒ{TOKG=-W(d9Εe"}!:̣&hAs"J35 GVn~-s\c&VR" c 9Q[@Or Fr/7"*@ө&`ƉrkҀ0l%x)&/c@JbTZrƃfcEl85h އzY:*o)ŹAH%)y c)&KJyF%Y& [rjvۂښcYi1K.$h}6g&E͵ud geWv&Oc[-ܩ-QbrY< *Ӌ揯8+{Ooh`0<'_Έ+$A$W` `<8!(#G1dZXM>QֶSl)lqR p \L-$4'i I 0(vbp68w!ѫdjkLJjevQTX۵.ꐄrB\}KKec0B(.W /3x=!<uX Ҍ%dJFѧRʼn`g>Fm "9v 9i gRțS)ae*t6 ghǫ=- XTE$:!H)Vs&M<&59@|]d4k bH>do2K.O *J#O~edI+uLG rlx벑c11 mޕ8r$B%;c=hG2 kX> yvsCHNFwGVţ"YKhd13*3"2"2 YbR>B6xјI&r^[hwJ*KmZO##px{1E`2l"c lږrFm'c$cP4|,3 B~Wn5oy>ԅ ?ȸyO[2隣wSRcg6(KřARL4%y[x5(΅H@6Z#UV$!uaQB2 IfBc}:Z3ݲZ}*wqX`~N8+!MuI^HS<ñc1 ~"Pc !}5Ɖ\®8Q'*@ 1#PqHQ1|7TL%c %i| V1nox/"HT_ʏ\iUAnc`A*b(JJ:AUm.bYEtj ||ngգ%KC\y+?An.:͞|[1C~_u d G &xñ;\48SY,U…b ^N#/l)N1A1@QUY9Y5r1 JK,vn5+-}%`t[?Z;Ôh5Y9U"q" 6! 
J ( :5D[dxQ} :z7&G4w`h|0B;+YAO.\"ͼ8Kx6e YiW }>])t~.Oqt(?gyxB m>%(&d OKiQIT:P6'5~vcq1߀À2G_qs4YO!7:J8~8M /W|UМ/?}OrL%hK׷ v9L+lۘk<[l#1l>_0wӊ޻{nDq/m/+1}A(S)̑/4e[jr66#9[Wdx LqY̘C:,z+ʪUyQFPT1q⾢̖:r|5+w?iz1=~7տ]ߔ*3V=䕹VGP\7]K %ZŮ4&bOa:[7;LO7)g% Eb͕xU-%r+. ,Qԥ%WHcYˡ`0f%=2J)XF 0}RTDc2c6Cp3  w؂M&h 1wo%Il39mز]E`=eWlЯnBDb7!y\촗;kaE>Ӓ)E 0ó9֋]vsc̱l:~5sO |g+0|=SjgJtH3M3gnxɞl_悥e}=FhEc Z[ь+љ4M@ |`ZboZ߁l} 574H@Z f+4aж"k_xsP%]lQݳ@kBqkKXVts>2;0ѦrV09{{UG*Fpڞx|.JuĘZ~to,aǵ2ay-:T(٥ _2Bt3nn2+2-S0J]b%K, a~-XZ/+Quw/D0h'1a2tp·7p<ɷ]ѩ>` PQ7,9@RH[UY_$_gɃ0A4~!7P.Ez;}E93*rE\= J2mP `_,>.Ԃ>+xc?G([xLI.%땊>~ҷ7$5GH%fL#3Faڟt8+W !A1`>z<0~Љѧ8j p񃬒c?94Cy4TךPI%?ަϘ'qGO>@yWU_X`&E<JH^R(;P3Ӊ4/}XGQuz 0[G;.1z R$ϫz-7HoDyw]UڣQ m LmRn(]e_]t;R톇CɽJimLֶ{L,c05%܁ɣ+&9NxQ >^I/Ѡԝͥ1o2hb F[R_q1NwRylY,jdG!)̅uxQW\ ]6 *v!U'*5 !f`k.PHx3jg>LLj9+ITk%i=@Q#̆u(*Y.n r@I38}>as]bo~~5xGgV9n M}C-oH#֓n.>qÂUj6K{5zӨ<## ^T[N (kZ4p< jFULA2$)&(\A o VH{Ѳ zD G`yB T<0IpJp2RKQ,*,7JqTQ&ZGEdQ0A[R.k%!RHDc۰ܭ 4t/:ŏ7d) ԗc2z.%o:.,U31eb^)9NNe'zS‹IλBC !vdyMD҃Oɳ_Z` E:mcqorCvYnjf@iE:^QDW68qA,PEkL|d( ZErSGF0@uK& xcQc*4 !0 k0,zg՘IDk1P MVHKDiXP^PiNj_*[6Tn;z xћm2۵N 7pKCuޡ@e'+-CpG|L.&KDn$mj[ѤXVȀ?.0VxFzw׮Jk[vr#Wn~ٷmy03xoL{}[/VcgfAg5bP'I僚>\QaIYyqj/%oXRBF1O7e~WڸR\ܡ'O+#DCT>(3n.h,[6ئ}$b]c'k?`]ZM~;J j?M4,+(>– P)o<لT{ e+~jk [!@_emʤ`=Rț4jY"_XHu-;n؝ҙ,2cYQeᵞ˪$Oܰ jg?@g %F6Wb@k+6SS$UKCfCYQCRU:,T]l$@ bSj5MLa+\&K]GnFiul)%t+gKj crV dFc"(e$ǯ{ Q5Ahd1X+`[Ȭ"e6,kR٠Q,/J٨wݽ3pQ?/)qwlc[KDKĽDPbWd($i" ZY!umYІ6Ί$ dH*gB Sـ1%-HobWٱD O2u`8%_ggT\E{x{%PbUF &X(M-h^.>\yul+cyHGa9|a/jmIpg^ 03KV]/3J& p՜oq{`s%n 4`emJП1{,盳h6:yq˨1صQcr>0hF^6#imx[ Enz*o,/W;N;n2v;JO0m9S7no>Zn.2yi GPSNR~$3o8B*W ӒI@HA&F{]b-1t%& uuF`!JeY5sF! I28Hup RS[Xu3p6/tC5+ ɀĘxn6{lӎɹ&a˱\rɗ⑛$$ y^J:A.E{!ۂ(!JZ.%4;xU^|IwC[1-r3MQ6"QD2VXG*:k sDY:nAwA ˏ Aվ2~;,&p<7nT*4_ "zB(3wP4"Fg$ʔIT,F/,?=?մ #ӡ#+%`u{fǿ) AAo#c0Df$(s=#uH[:ֿURĺw)IgMJEGX䱚 VF8x`cUQwRD'ڼWޟ韅ix_k"y?o[]*5~ka]uQ{3Vhפ0M@}!Y}_㚿@]W? 
F @ZeB!Q , K\(DTw5 J`2!EqR4D#\4b;f]81e^%vFøbLr-kKVkϯyL)Gs>PsbD42ktd!֊}Vx Vq!Q&C&&Jc*k̇UdHAu2F۠~QUc_+{Dy#n +0FdI0XIsQ Y,=< emzDg9cI.ke4 r3!e{)I 0(f&g^u 8Yl2vHjZ_b7/׺C &˵ %*&.0,.-4J:ŧiǾ*=ww}Vq'W EH~|G%~sseԜj^ƀFklN8 !נ&ٔr4f҇YW4{m۞g|dLLR9q FA( ek c>dk]Ys#9r+ ?Ƒ $:b_5F*CeinfLlWhECպQj6Th{Tu]mKivr+[Q3-o9*UD+s0GmO55nyԈ<11Ʀ)~soV -/Y&4GZyx+X.F1Q Y)aOr(.y" m-|b N[ɘ()g"2 Jz`p6_\N5SS_s3  ),)* [[Ȱm2PK0%4v QA$)KS֥}(KUEdu`DN">H-LrJ{]3XGK!J*>ARCT#rS9I2^4O;ratqڍ$c.ST$rd},R:ڊq&C }tIub]dlD2vc]4CQӶg<;Fn-B!@?_!N[y@`2M*lTӡ=8)"B fMea =$ckfI"4lQuz>0 Nnp17 ;g<,lk]tU=wm+j8iGҎ+v9Gl<!)MGo:U(̤ iw *^q{ӫOz2nǺ >{`=tn G]:HkQOOFvIqH!AJe)kS>rҘtcM7rNM^v$nyޛN8^M];Vn%T^%9!, % O+ %>01^2a"(N%A:d!4 Lbɐ)i˟HChD0H%3xAu)`M>.guG[r_0 =nu(F^`(SF3PydbvE嘙w8bೝg_^zzgbyrjt H%Pt^"@h> fD'% U|Y]Cr& z8>9ZR:k5H:rN(gЈN*,Ґmd&LGh9_ CբGf7?P2-:@-@/ AUW=PNRk Q #&V$+SWF岔LR)KaL4=ayzAnu pgV4>HCȵ4;"u|YUmU3 R̿Ϳݮ FWl4@/.|̏!7onrQk=^m2Svs+A27;=jQo\/Ϗ ^Fxqf>*$됪wHw7dj~}Y 2nqg ƾVދJâs+|50#\4iȃ<7B;=Bm3F|K)T6 vS8ȧю틌M7/Nu>gF樓=z Qڏap5in,տ_j”Ww‹Ywg+s=LD+s{}|1^']VlͽpϐV-oS.{^ח+kf»F/n n Clvuލ{戚 Yz#RA{T #k>ZLKoM(6M^AqE iE tt6AާmU$Mp1^ "I!j!jQq 19JFlj!c-׋t4lmx=iƫvʛP/=Wݹӽ?iʖ>P ~JK/V)t.ӟt#bW Q#*)mF^Rڌ-GTR}j"kd}DV<&mYEQm5ZMbR[cĀ"2l6ϑɽcөx*wF@s.s`@)x1*6 N ;&cq "|OQKF/̩$K;MB`,=DeۦP , V@ZXhb` HΊBքJR* zBl!FƮA.'aه^CcdHD9jG)FiR,*[JLAP \xտӱ -O/揪M9M)pX7]Ͱtǽsr1Z3`vZ3\#3co(SK#l&d/5i|H|Qw:%sb>ѩd JQ.`p]QBl[j{_(c[ںgڥ$p% E}}WـEW ɣ5I:x'.A,ӥ%-:8! 
jȲ1kn0P̖|,-7gg󪴦I>]uUfZOyi|_m_οϯ{L?]qe7d=O[jH9=??Tf{0)wݒ^3mo+Wm;d25Uth= 9x{ Gu"`vZ߲fD=@0o6Q%G\2b했2brVXDrY Y-Z& egZFJkQ;(}q)P F%jUZ ;忕5V2t@`p^?n`0*!ञչw{E(;9>_>e}y>U>&@*$w}|{y?@TYi鐙Z䣴M~M(uDQ&[;5^[ߕ;lc@V5qgp>0k> aتC( K[BArSAaJTӡVlj&M .>i:~I_#_A760[@l(z>N&́L+6 oUߪpɶhcߪ(%L[Ge,)wƱn]>ܬWK[54i|L'g_~r~us)[ |1& O4!֢-tph+Zcg銒pb#di, 4CW.VArtUQʉIp!bpm3605bUE&ql8ڕr`vp9,]maj;k~QЕقDWN$5DW Ẍ́X5}0F:'moxmy|}U~:=[>X.Ÿ0.=%Zj̿@n29ybg/N}y+9|ո+}hvpvhjnҍ,ތ+jשV ULb{/9]UNW%DWGHW!bRf ]A骢v#+m)l*`g+n*ZNW%0- W7CW-ڱUEi':FBBI-v|Wm֩FLtutUOЌ,<'#>)Ia v -4U2`okOa{@ L{/ڰF4L5nՒA4njZiŎVizlVl\$7oG[sK,"_ίK3dUdRk Q #xYU6 Y"]o#9rW#rGe`w`Kn|ٛϱ2SzXղdVjbh=N&RJHi J(0 +y>M]%0jrMK.7J󟑹 c"Ύ̭<̥nގ2$DxtsÌ;Uz;vv‘v\.;> se0W5W=}k\!Ú1W]&\en\JњwhZ32WȰ烮2Lv\Jٚwh8ef*k5;8:vTfěxe|Eގx{; eiYdx?(ogR߈QE/ -) %WX]#Nd򘍆~u%>b?5Qۜ>[+xc6k_heA8o;?9z\l/Lt@J~|Ο%@168g2s» 4Q,&vH{uaBl q\EcYwDK~ءCwg\4aaq;VmQHې' kEلZL vk{=I8ك>_gUu?6F%i MD^;2.C<@ʶG sviNc2#ؾOSٟU$U`!YgNƏzۥplJhUbJ*$s@;lNKlD͙ݠ} -c}wo{$*EoeB^yL@'+76 #+P(Acu;Eefd2&k`dJk vNc41R91+;d&JyZ]n[>=` K>^#_ًv/|@oN9*ʜAIݮΔ5''i&wW}n59߿'z ~͌1OwQ Wi~?Gu합crj}eTf$:p|x|[fKQӿn#%e&4P {qUWAl~ l1;7t0TU&3.|+/.wy^lUA TBZi(&%gUڙHIZ*0XG5Z DZS CB-MER L#U[]QyY=7H'X#P&V;e8-?ǟzu%#Ł-1I5 in9'H/~6b5 l˲L BG/.W'A66Y/ mނ,_݉..p4ͬyN<םA5EOח|_ y9Y"O]GTK9Az(@,'yT+/2I@$rX4-i3`)kLH`~XP2#pGhk1 08rᯏդtJXgxp֠Dyڭ$~utgW6}?6GSy qFe j RP 5!/ $HR$ ZX,S1n Ck`dX0Ρ"(p<.4A!JBC PŌ э qC 4䒎&(ř@L PT911xΒj=k04!4\kV[Nw5|Cz_"ׯjVSlXO&k{=Gx>`K kgp_Q/#@0epI\4Lr|ԨTV }4E@2g`4lTi GКZZfybR,*`r#G $zCTs􏜬-hky<[ru1%5A GȖ1 \ dAm a<$=Iz;!GԜ ڐAZߘ֟d(}ТFg7M ^cj+PԴgs3 sodCx #x% QC-ߺszkƷ7Պevfy_ܺga9/+Z]JӢ7̹+gN^Lr=vԭPfoM ?` t RJB'TYxfyj;v km:bt9k F"M\!xvՓ !R#AN^8A&5S杗[ӶRTIFgM VK2QD (?TFL "e$C Jk&DJdą!5 .iٰ4ζ r%d;WDHgAP54iv|6qHz&QQ])itUC(ˉׂUm-أE(YPYILT\р IqD1TSJi * 5\A¿*#)ÔD4F )? 
U8nMx )0 9E"\8=`ma?jKŤ &_7S S-,-"T'p™"1Tm}GהܤsY+Xl|N9?Y_A ϗࣅG hbYi`pP]x,2N@Kͥ(jVд "멢@Q<L@C51Xx\4N-:2 |TkTiQ\7fdQ)1@`*)cET@^XgUԊ\Ƕ`MJhN4)!H2 / `\0ATQh渥DYW>@6SqIIȇ<-<92myU-Ǐ}(zz&jLjI͆ipR2\H@sK?>5Vע*Q%l_52J1hG`C'/it>^)ưEJ؆51v^8 h8b{g_mNѺj1gyRP>TG6*ؖ%,9H -ct/].Y`8l2X`g O[YH=WqKܶđ"UXE2%X$<F`F;/.vmR$v]9C"e)PcE5W*w՚8mm$Nx_%CIXa|B ;ς)\`:@)d]H]פm| e_9!z7ëuW=t7 p=G~6A`7xq*x|*@>z}u>!yr$X*=NBLPW23CrZ'\[$)Gl,~E2xZLe껷P/6ޘxd.dP-+zi76d\+Z:KW^٬lYpOYc[hn,_]WnjE޴muœq+li~lw)<vA(^ߺSKq j&0kΪ?t)BM׾%np۟׆%Nn&l,mdic 9aJk+>MY_/n'G~1I^IS(>6ԳW6ӇIrK-A&SZE})3)X+ߨ;'{GN|?#V yDDXdR$&1Y*#?dϣp`SHނl ùԨqjw,`Nqj^:P#e1Q# E? ?h>A\kTJ[yQ⑖"!Z';&c`;xxWRq[E-vQ"89rb9E@ʂriZ,J!$HNl#SZcƠSbTr lBj[g32UZzƮXhZBab:3EOn*4N҇OrNޣP:u4IG%Z2I]DE*4%JTry`3deFs \*/IvDD02WՈٌn8K*澠vkcG]CTY#UAνM)gV}򭉳+~xG^Go슈eD"v;;X9øwFrFIG8h(I*"M *F$AKg\p^('GMVQ9mˈؚ8"v}uKvE2.;\\\kt p>j*J !0HE (:8$ R[!;n}kq#A,U9uޏ~ k5X2j}|vK&()|y>}Fa+A8/JXx?XOHPuv1+'sGHw)8B$3 3Ϩ'Y(R'g&JA")cL%`JIu6.E{Ͻ*rUfpozCQoo׏^N=։*bZ8c Q*G`qx*H\@D 91řęqT9* Y@o{[klsտg7) sK2CV/MepBepS֣=t*ܬWXWuJqNotW]\YҖZ(!4BIjY_/55m3.z$%6aNaq&%Cw"YǁD<ѓn ;vr6ȿE; *$yHB1cZ3Z4lfBlӢeӌss__Y|R'ſꟹ?`'c]FSY'9q.ϱqX #8P#83q}4lp2dy8vtEΕrdxwG Qrzn$Z^qz9\PWuۭ*,8d)ihC Sǟ]qe+^ٷ*:9YXJ.~/8LG~C ǑyFk'.قF1PJWٴ_+3?V']O./ΗQ si Rw~Q-7]2 ۏ@HSIKpIMŰbDs;G!>zӜvAnEd]$m,*TjS]%Kg+~\G_c1LEczDam_</zt{Wpt?~|w}(g'lI8)5,ߟ$'E?E붊fEs [X666PÇ>/Rf{~>V ‡go^j#/,fz'%0a~Y' ?}D%_KB-: hv siKB͎I6d\fMMבFd" #\@>B{FB)FP[I5St:KfWV}48ydDŇ3,&PHY2hP݈ '@D=HJH ^nM5q\_MĎʼnhw8yBT]Wvcz{;Ъmuvϖ/竤ձb:`X+>䢸Tش;>ؑw6#칳ϐnoY͓<Y˵ΙmZrl`;c$: SQD:cLV8!1r?]bxxDqPLcH` Ԁr HBW&*ʴW1q4ܯ;/e,d΋3́i_1xPA$FsH: OSIvWN"6E_t^T6Jb ыn2x0vMkSy&4Z@}`pbW1J͒JXSUΝ[u6\'!Swh8/ =x)E+k9+ξD}BTPߘJs@P`Chz(p2fPJnLW \0WY`P\u8*KKžUAW_ \իg$oL6\=N` \=N\ثգTWIɁ\1=\vp˫+- PZ!J\};p@}Hp2+WBiW(^vpg¢UTV9yN7Wv4OVޘo\ZMYfՊNܞ^Z&SBJZGgX1?2]pop8ٮݾ{̹swo>؉On{J2u/UWÛ8^+̆>0@U*U$ h`6RJSDϴ6I^K͹^eÔWq5C^]1a(4v{2= Lc  -Dܒoa)dYRdE*O]_o"z<|^A󨹛v&UJWJF6*05[Nm|6aAܦP?>T'd?>|6.(YjUC\K:ch8M$S׉k;UAhs,DQL;]qR{1it#'c(j0 "h .kڽ`\ю4ɺVDGWl680p.r*:rCQ.%#ݶ)gZVytT,61II5JN:Y9-KEIssFC3ZDSc(8jl./4͡)1h$g*E I E_ZiKdz4t50lj_ 
&y4@"L:]PTr๨GY2F:vzϸw\^ċԠ0(9(s_i媴Zؚ87 VS/|BI==r*.&zq|]2vYۘɻ l!s KcT*^FQT|rPLͤAIcpyy%ygͥ=̖l5M3a^ȄZ  QZ0AiÏ#+d0P<1o50tN;눇ͥ-7 tى؇cg]:{m+KGe+ A*P/Iaɡ=h|sPSTX b!ˊB%‰Ё.( J8ѠS)шPUMɨ6FkB^X2eA G󸾰8my_cHtQ"ou1&n2+ LqڮxhlDN!4`@[S:FЂC.doxD ^#a_"yšt%a4^*Re nM5x# x<M62W_]L.f1weBvWtٚ \:Z.[SlMR}s Ǔ6nQg8deX!0HY0gE`0R9;Gב]]'F?k}*eiUR4hVγH$SAZ-+F 3/\p*bjL9.q9Bń!F/m$h*x2mm&ΞޓQE⅚V^XP&#&iBYe|ump#6sn=9h5'o8Lfqt}1T i Y>m:u4d|>b29}nNMzf+\_6Gff]FڄeȤ*;A+=Obs?I~ߛ<BAsj2!3USm4atS ܀d}Wp3r $'3_$_ZbR4B|8+dP_P7ِ|uS9Б,ϛZYln:-Ne8Qy53{KEotY&a hLv߫yZ&5xsp9o67LU]W:%חK֞U(!?[ȝMF ) ZGG6_,%ҌJ7٬i8*]Fl*Jlݿ]{W>w=QJz׷xDyZ̽[g_}4++㻜/Z*<`j6Q~`WַikO/}Zrog7iӂdaޕĠdz´K(Bs^[wTm%V^sԋb?w-qt,@LW_jWsͷΓ*%)-%ThMABk7, VEu$oSDsKj"QA )ȭ9zNF&TL;šLYvӮ}IާdϥbPU G_Q+[z@h+W:}pp3!. J3I(ijUSy ;Y!,iD(]5y${>r>ýJQuMY4k };U&i^7+^]kkɏ̓oޮ (10s~ˠq99?L*4;m-2ll 斮5#7whu#My8j֧9 ܵٿOwuΈJE'ZciuRy#aM篣1֪o&7*l3 }RU}I}_~|?>>~v?L3V"'' koozDӮEMKfO.oiH6l[n@JtG"WvsQZvOG '+$6\yTR*nUj'rَB <^A޷ ^HbNH1 =T7C,b0< M*D}g#=aF{~?E36}{e . 5He.` (C"s#K#Lv R"(؝r3ӛ'N߭36' |o/_ztң67|;_tFlU]˄HZ ӼS[h™R*20Ra3 <:7N^/a|;wONPou02:ͶCZ/@Xs E[)e&A>کWą6F$.U\^KJ~I\%qhO鏜%{x6>$0mm``g dZk댵o3:$ꄒHb#:Y5^GfSL)JȪ KvrHn ,'d٪'_kyH=>^W6[^C 2K%Ru9Œ8A))sI2*$'4mUQ3pe. i@uqE_A J NZeEmM5_Ydz6O!]w=u,mS!X #%ͬXa\ݐ-)i_KRQ zrm*.sR6`g }a{idW {ӌCn“bO3HP#,̺L\?gF#MëH(B$Ш`42R0 F$)(U~_*ȏ{>IBJVTʵJlN+]FgJEDK} V *1B)(-ꂹ(Sإu ) %2P& ׳gݛ8;&BNt3zDqxǃו-ul*h8}!pHTڦ|)feGeXK$TmW^w`]BDۻwy&gjϭ>UØ \w[bWic͋ϛy5걑E.@. 
Ґ`Ϯ;r=e\'μ u';HC\3:bZRUjuD7[ c!8UD%$Rg§$ 5(.'K'RD JʙHhhId9,6ZOsгڛ8;$/uzSxe6D*$uT5Zf4xk vΔUH%˨߈u d)o֥}(KZˠS/J"m} J䞪~OnXKR@ d !QDI$[/NN:sJy]ͩg_q؍\(yIJ(ΰbX/]hBJ5z;#9a6x[+ξ͇Aq "Yӷ\~A).6Z_ޕ>יyQ&d TIg@y,r$|q $B fEea Ȑ5u<۵dy/,Ӟ7=q~wq͒v~攌Mnm<ЙAY4xVB腼#c֯+3 _MW*+sVs\j2x^A0v=Smp\rء-4av`Z ?~7N&~@#H@d}}kcsc[)Hhx%DY@h6>H#ޜTA|W&FQb)G[x1Ԛ֤ݔ;Xu؞>Z^}~, Wf>c0?J2y Ή=ekl 5'պ QdTD)99€=!B0 (N& N*Np(HIYtV qHa@(H~'6՛enh-vۇYTuH6 ^[=s{$) 46QLlT(]9f,N 1L=3 M_۴њ/m^ަ-nk{]xuXB?>l~ 4v.u!B Fm3۴=&f_\N>jrR/'kvq"NŜ, Tv~]Hni^Cv[`x9=Ǟ~9r)ImG.ni5ƣ;A?_%Ӱ6{Mwv=x9Ggu}[j"_qz25[IndIͧ<GW'aqmhq:_6=1kNwK^^8uNXڲIZ\z?6IbĊМү[&Z.&l[w=1_;BqH=zz?w WCv)Dr\F`4 齼W|SDY\oxN{z|}_չgq\2 QX Xت-c aLQ2* U 岙OBpڛnHp^b=V3= ЛezY_yMA=^F3?Naj]B?iޏUd;82&7y]-iDԮLq7Fn,]1kuzTƫ?4Z44wۛr}\df~fRα4JU{n<5>\g؋).8 ZNh\u_~_?%#iM, FtS2LG,;e'V9JK珓q:yj+=COʕ!zeJYMd$z}"aY]p Oݠ'p(W|#X@tG-'fOlʪ`к"Tuq>뾤(aK7& ^M?*vfԹd)Q 4b?MrJ6^ \Uq嫁*WUJzp嬑^\ U׼:}pR" W{A~;p~Ҫ"WO5 +`CHzMpK^ \Uq嫁*-s* \IY*}=UK;\U) •R$9L.jՖi:YUK][>}vf]6uﻰ/]|Bsp6mU ZVm䰇??A_g`C E>:?M&KfRl讓/sO[ zѥCEa~wP-2g*mQ{3^t}cIӢwH~tI%߰Z@`eW:vRL;HRidk0;QV?$E$HKP28}Q-2+Ry"ւI*wىhC~=YO9~ Bb=#ʏGψ_gLsvO$q&c/7=H*u:Ldž=9}rFD€ʐ6䳌I:pƒHmd>3FLة:q!h#̎sDR/!ԭEUC!HО|bZ˘B!PL|0 'H186֐䕋Kv2G|kmOa^H:ZןG#i{Ol,q9A[& w{i- DK{eH2"`::I`CPr hU&Jl֭T/愩xA 3/\ Njr9V)@0 mS5@6>xd{Uj@LA }P‰%D3Ig.DG?9BUA0̊ =gcyaNcUq}ƐڹϮ'A%]J1!qL2G8jDtWՆqŶ72uY^ED*D}3 LIlIlH\sǚo ~0 xu}aC~!m:%nݴv \;ϟͩ op.+\O|L1-5&MH3!s`p:>d%844B`&\`,ؽ;zPL-X^wA>ro| ~:S4J1!oX x$0Qtrn1G(ww9{ 'FPfznwt%4L^ҷ\g5ܑ;ҷ mw![xؐ.;v"}t.lV￿GKY^{Ne+m ڳ߸֡W]JZ'K6̅m =Ϳ<̞<-~0l_ͮAz'}ûߦߣ'Cqlnovݜy2%3볏hx9ʬ$\bGMYzsz_FS{jxa~B6kpC=yzDE56cQ^ (c0$QVu׏/NO'㠱٘@-r% M H,3^+x ך[Gq|Qk8Dq\v60ϼ|_>N/fVܔCȔ).ǤJ%_&G/0 z%URc`RfG;BHH ґM RD)v~j٭K5i 96sT (sюLIl֠%ұtH u38L6ЙP&2i!" IHcD"Ѿ`J֔.c>J\-V oMO<˃os:?kЦRq|1k4H ;41FYʙc\2@}$D%MYi+Wd^X%>ضj%3l,'2YpnH#'EHZCדc'Pqy%'KRVʺW )(d8fڃDg$0.cĘ-{e4v?ִ"qC 9A%>;̉`u ++5 d"`@R$\&d"UC2G[Es:imb"G0A8pxDi8תfߍ%Yak&xT|cK?KI,:*N@}gOg] ¢TtvϿ\qoS8;'z1 BA!|<2Z|>m9l7Y G[oٛsMat|FjHDyۼ }{He$ɽ[.? 
Z$ql9I >*s`> i`xqѯw~fz1*0[3I ̥}8Σlm[`|5fڥ}G\Qԓ=Znn {RژW}*7WƣBw}/m|cmu1Mn̈em⧅_&Xi*8ojuGoiTLϬx$"sw>~| h4aɮqCǓ x]Z]S= O7X}~;2;)?8eb }wuy]2w'nלrB{ .>B&5 ΛFׇA ^΄X@K;t/@x$H29_&oJ~#yK9 -TRNs@xgD]NC9%0^]&JLp=Gzng5qg 6}wh^,5cp U"e0EvR&8ɐA", g:Ut4~n,_~j2q2q*mA)';'`{΃I5yND34m~NT ˬRӍ17g<6^okv>.v})hTKn=z!K{GCZ6'j $2mH^6A hD)=z ZvVٓҦտ$ au* S I:mI.,Г(b}^*ye&NN+9_BAֵi|nzPDسf)tJY"q,qc9EƖ#fAB+o坣oAwx6MwԷ!#ϗ6v{(Yo{~B8ͲzZm%MEVN7Zـg >e;hdaV4g5\<71jl*yYzrT 1Bg Б*DstɘxU`$₸G|Z08L( yZY`00kj ZSBHP8oZ=*mlz_4#A{BIm+J-I6ԒeԒeFzj2J'ԒajI4} uefSTo5M`t o865d,,թe%t4<U4 *sCq_xus 9YїQ>«L 1[A<'Su8I=gHBX\N!GV8RHD jFDXRf(ڹ& Vϊb9&$FH%fVnEM@,pףxCغw=u/\ח~UVyR4Y(cN1 ^W6r(21aeW2jkr9fM, ӓ61 O %UZzB]3ږpv[zX-&BW.GT6)xC=ˈ-NGş=hyӟbgD"MZe E!r%Yc*=<2-C!{> &CR)Rk.L܎YNgLOk gWB1Or-8jKVjK[nl&rŔ#qVčAzQJ9GG{2 L%D"5:kE]n6U!2@,e2dkbbPZSɿ|YED=6#bOWC-"lyo{MHBA&xpBbIHbȢ I XM(k[ič]ֆh@bgB h!9HHLZp@vjE:]lrjee(zǵ.ꐄrB!\}`%[1eXIovvTa58Be{#eT؆ډٻ6$Wڽ ~6`6ؽ\pAЯΒ)>=Fi1bNWwW=LuU9|Vev5:SBg4.?{$C$k'xpN3GԺдD!gԑSP8}Dp=JT wx1#$ρh];C -?Rk&Eͥ0 #F@beV`j8v|漎S=M}` 3!!6iuy.of'cLE苰Ju(Y#{Ѻ<8u斂S_9j?i1m|rVi}rV=;b!>^ BVm=B?@DJSna8 Z3U""dU|=q~.#mX/i ʼn@$UТ=eV ǧ* ښB :*U,0n E*l-iNJj `yMVU.IpUyhlqCg}.qc7^q$;>9Zab<~^/I׋U|y*^dL8XC%Wùxj [mcQk^s> HtW0 8 WLj@9QkS%2ZX!sHL C=|㼏9`=7L÷`fSmqwW~΀Dvu/#ɑrAXu95QlSU>{YS8T;83|kSزOl$N>#U2ѐd(.8J8'3 4I77 8zb{ocD"iDVe{t=+"\a؆`YN&ZwzZ c &DƎfqa[;9_ě}y |1]һr8[[5{(ƢL|a* +p *ٱ;߫Xtܲ}֗}O|qT~[f-ᨆt>nI)WYXVu=sZd^Iii?R2쬥]|5e wbIN'3t$] VCWߏSKPe^э7+ݛ4GˎGyXlCl.6'D!&WwhwCr:N+y1o1Qj!V=º7ơmơ'`,rc 4j ^7%oC77rnjC{ _.^O]\;O~(懖}%p߷嫾:>kaMCz7:!#]/nu(w$-~0io2g;8^϶b=_x3])ccG:b9@U{C͸ᚼwljT^fmI|@ŝugG+W&O3Ok_CO_txeeEsLHE+[}!!>&IEO]Fyxح,v,^n+-aŤ jmЂYQeLV-D+rRLdMʞz^TLٮM7ۖHqyc.הvW(_>O:~ڳ8}EGu{*{.']Ӿ W5M|q~%_߿n2//Ͱlpck '^I_j'Ѥ?oogkS_'(ۼ_&/Zl޽\X>v~z/ ?>T,I{t6A;~t~^?|W\ߟヤ摵auPaRZYɠ:ɬА{ KGUsq`uG_Z~]ltA#H*@XB:d/~Bz%i^.Ų|zk./jҿu~#X_Gu V6jd90X:W`O?NOh> V\{SʣhBԳ#|Ӡ{֫S=mO9 \5i:\AJ+!\y!'v \5q \5i:\5);7Wj˭W/V:&0';B=-\MZ&%Rw+un\5jZ/pդ}l_pjRz:3+%j U׻}+HUpJK"Op U \5iu=3+WWpe@i j7piyj%Y7a7í%Gwwَii/I1h>#X P:U\n?q%7 򇓓Y -/JهNB\-go1y8.gYjo[35CWT.W=VǗER8 
jY+yy5C=>-8_ޮ'_)|Gaz!9^6$?.۬巕A&!6{*\DhkKF#CK&+AUd4V]ٮ{9:b$h}tK(CZ,۔QƅJYdIՉlCw[[Nb]-i5XTms5uq:6=P ǂ2^,,׊.>;BT2(] 6Wr*=h`j:xLF'aaCE :B/%%* < A&|Z&x^`e&J4O燅%= |QPVAkթ-<oŭ |tܨLt+Q'VCWawFR8-&[ae%D@w24}XXA7vYYyX~2$אEa}+xm`7= Rb8AbC8*ᒷK\F]ME\J&",C)906y"Zю:!wP Fd޵ql_yT`f~$y88%&rsUMbbQ2w"٬^]k^U#ՈT%lvwUc4gFZAmJC 'МHAA6.nmBgP@2,6QM)#=HZ;hwEd !ܠisWcD8 Q0@r",Ul9i=`y!fQ4YF%5@o^5 T*m-ދ*ʺ,RK0 *w R̤EB}!?gN6J `- \nC@`n nu6贸n˴y eù)%:P`0$ڢ+R7]pDL[-S ,Eahky.j cȪ"GCmmdOj~5}A _6N #`":3Fs衄<@nxВ}q!SbKhtw0R!; #B*|(Az|PȔA֍M`j+Ap trH)u6!3!v*ĶB!DE T( ĈOZwzր됪 le?(tD$@?e Y#9a\U8/ڔ?jh o:k Pٗ9pGZqJveH(̌pàe%3}΀=pem]9%a#*Sd%IJmC3ZR Э@2xxp4 j4fC1IpU.uADC,F,hV""H"8D%Pq9@( r^Bר"!w*;cgt0j]ojLv/n͖[7 0<e1t<޵[6IjkOex~>}i1݉TRpQ]49w¶u:X枯v?\[q+W\pUXd:lLN phbqN @ BN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"':S9k9'PkX@ wJyRB)z&9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r蘝@«~ jK|Қ=H8ֱwi͙!'1:f DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@:"'Ч[)0ڊN^0m[MUO?uRb$ͮ'@ڜqɶŘ>"i% 6>|PjAƥc0.c#+R\BW NW@,s3jp ]骡ܺ1ѕb6@?֫- 6?k34P[Z+i0yY|yuWIX[O{_ n|+]jۤUTb:6YDŽzJz7_wbyg(nzMmb۪jRIG;Dl0ňh-S[{JJ"?O𾿞 \".O_tIW`ya-{>'6 }JrYѝ Nt^91]sHrg?tM˶]LWI)#Z֛·\5',qFh1Ffjb,S s>}j44pS εf1Wf&骡Z':K=ufVluV|vP*It+M/OܗZB-FAWM]{ˤ ]5j4tꃧ8#+Ʉ 53UC+ܡUC)ѕj7w+ãɮ3C+\]!]i`CW 4cVC1U#+[\51К]%]9%;HK ?\SJݖϿn˕7rD4mf̍\1Ъ#iǝcs4GCW nh/14]!]yEj4]ъh+X誡=  DWR )M嶺+t\sv/]RS{Е"zlsa`#+ڍWWr/VC+t^]!]')ѹ{GCW K߉b?/c?[{$`>d\cBW ;xj#c$6 r.ʭ +PrDt xװUCUCI #]6^L@:tJ(:JrZ/ ҟ3s6Iwnhaf,Oﺜ̓ggw0w~'z{^a?c>'ZYhK|h٫95в"6]j;MwVm'Rq*l|u͘bju6?Z8).Lӗy&S})*,Mo_۷PZˀsye_JDE*~FXTsUmI]ߓos4Ft>7'v~]kow:˧_x3Earϒw3{p}8דy~ To_c{*'juğd+7`ٮgoٔ DYe0fSbLUaT!8nD#WD՛O78/ߣcn:|l!ât|8:Ř0_7sq6Pޚ )uH҇Y:j+`Gs-` `ᴶoz4lYhM;/jƠn"]ܞtQk`0Ba6~_{;޷7 ˋM9g;/ ̫Vǹ Rw>?ߦO*1]߷?n?\4D~ y( CE ({),',aK\,HxrgTSyF#HDR9D̃&PU8㾔1c{pOᢲ\Cu46 3pb8>K(U\wS).r]0~\.qƮ)A4Eȫx_]QReg y3X SnXW dU&BL(h)MFE> mǜBTz5b(83vM<,:ڒ )ݩlip֚­MxSjB2\X+SCj ):22C,\ K%]o,AT|8e>̥h㱌fDNHx_G2^+kEUƣ2EoNyhXv=!tx$ ' ʈqƌUE+}5lgbɩb.%(i ̈y(h^\tuHsEcyQ̋xxZM,BF&+ Qp \v9Zad*Y$>&^|^<>,:ˇj`>TgZ|\,~|p6pWk6^큹%t^#w=uTFoxMpv?[4 o}v[;KclY5FԽ`#d FtEvDSUϳRm#j6B> 
38dr" U48R'@-Й$ FxFXjUqJDP%%U*K%L 8Y$BY37BcH|3g76D+mHo3^<"xfnh)q"ej !bQT٭TGfdVFqoV[RO/0.qRiL)agP!'&yAWB*sY,T"f }J  S"{>93u o&Mrf.ܱ)?dlOD]>ט'L<LןV}5 FҼulN(UʦD@z Ȣ&Nۢ,j,j$,j:%b }L<5φIC,P% sT(%c:Ug/\Dx8򐸊.rNgT;փJ!2=x!a2}V9+s|JW Mve<簫whܴqX8&MǺ 4/wgs;A^0޽b:8e‰ SY4VFyIvW`<DC Z >mH$CBvZ6e& tQgWqibJ:1̙t-)qɢbTd1+mZ ˳čm{N:EA0FXa l,'e*[gYù!@OZÇ'Ac^PgVYJ8"4}D553AшLcS+?/R[>ĝ$(ĜV2O/!;cPH3"1\bd#HO9qUR@;QkRQ)y( YUp|Q4dkJegE 6_GMo8{ }'_57/&[ ޔS~M.(ї<537ܞ:5{f0Le fp갿ILa#ѻ*j &ήٹRrdal2%xyCW-ZU9}dHy,!VrRw4oa^ICYA8:~>a*?|-D'y{z"iث`{] צ*>/^:)V|\HXa|]`o{ElgfuQjZf{qpuAO$>|?~_/?~ww?~G;ΟhiS8k$>Egp뿽#muM-Z6G=}m#\[+rk}77JLbq=O㼉6"@L}9,H3>^ryuM!u}DŽϮ;u4(,z:Br*$Θ#<dŮ4ψ`b%0I`t66,D^Ushp8\'MKL9f0EvYH@NIOs`%K4;iLgӛ ֚xbn mQD';'`ƠXչmG0Jy[><{<ϟSѳmy/ҦwLzEfTb["@Aw0?<[)KÃRB9d\rbp.if,?D:7|=d cpiWjR02>`QPG \ oiGHX^omZ һ}ӧ}_GGQ)o3+^v[<%XkPҍœ'P*6b}ߖk;%CW6ڙOtf!#!-H<Z(WOZ[HەeV^VA _i΄Щԁ-RKOW/V>( a5 j[28 #HQ T(*+=!T+J^r/~D9_B.Lsb/_:m">^.))Y3(9%eR,\r 9c#꘷杣w?M}אjK5l~s$F1ea@i [ UѼdrOVH%Iu>VH9P8P$Z^EH(7Ls&R*F`Rx \{í6<8IPi%`C 'qX|#.Xb%$ՈPr%"(Arh-Sp(QJ$<&[|GA๣x8O4Lr2$VQl*['Ȝޘl5wBRRud զ8;bBr¸ЇM&yɅR"$Yx`21+:HKv܂P7 X+cC6XV$\f&f5&UDS-xd$OS }C=97@̹ӄVYK Q,I[ ],C~GWm/7-hkOԙdԖi8Bt7/Tʆ:`Jinùx=U[[0U"JUh$Cp\YUJ4 e! J2mt`/ pPqoSVyu=RntIc݊.y'b;uz\ɾYu/wV\vǤc ӳ+_H1BS_L1"p)Jqh `b4u@K}mAwcѢ+ Py)J֢`i:ݭ}_q*Oﰾ*<̗4IZ%+D-Yƃf:EAu-t55MTy4U~A]>s^RY+QٝYY%#1Si}t|u5+5Oo3|G[] Q4h,<ϋkk-wԜ7,/]ÿ|{>ucd7Y{OqUybk1j-#9_y'?lV?]=PMvps'l? 7Y}̾\ĴǷoj\G4~{75k^6sݵvvwS”Zr[o74}[xE$ƊWy^.S*V{J 2 /~}kލ!ss 's[T/$͟nFkTcb&.L\\ ;(zp6ן9H zi. 
LR 0-"&h{@cLڏy-j2[&vR7Q>nk]ʊr_hÚYkjYM:~xRvGf]@Х!iG(h^҉Y#H~{IOե䴼$Z:*kӮF$QZ Kf!fjJΪvZl2vBWV`IU%mmm<*jSUIyzZq˪N~ 6=Kpd+a2O!-eޡL" WֈX1\eY#e'^Z:xCͼxGEéĔYB2 RЃ,z)!w UP uW{)MdX8:, od8D3%S)h\w:Cma sG2{ļ1u٥jܱ%`-GP:Oy)3UW!W B@O0h|ElڑnYzt2ƅvgkuHN(WvnWFI&8$$%_& %YC{#zlz,4pҖ];gfj)7] n/pԛse=!a zikGnnM~m~}oG ﯸs%o[xJw=I+D}1T}0fwmk a^#I&kJ*K>0!"H%|z223Giퟓ^ .f'誷G&~?ޭ[*G鯫G ?zzBk%[7M soŵ[,O`z 4Zb5>F#Sl& |c9;4a-3NRs%c 04&2\J5ŽeMA.xyٛh(E%zn#b NO!lm ͺe#`$i56qfMpչw0Yr&S.1t&;XjfRTeM=?Os{K5-^ܳ%Zڋ9Dd {[;TB !Rw7y虖4M8%N,8b(Sc2丰ye_n~ 8 `r{8Ǎ4%"Ɍ%h?"&Y*ܨʄ3d}8Q?{-dβO7IȸrQ|سxjJe_V'ji?sS!fe 1-q)&ѩO/.KMjw+3CLb>KI$TO,t3s:J'pWi!$%L):4#9<2 Hv)Rd1)Z(zn*I?FτL ~Mg,}y|_X~9w0>Ƚ_.,f{cT-*6yw ӄa$6kSIxT2KdeҶγT<_}'dou]wˋ?DP@Z.=/NACѯ NxVzs$/M4{p-zBmS(^w,:oo}vv40W*9(lKGY5Y; Uʼn{@K'Z\GOV_n'ɃkH2X6hм]r~Y5o]{hfG"J+bxL 24Dsexu\^bz 擼iGzCλูsrolwwyr ;?=rNnaV*>s57\Z7!u7tN\;qWufi;&/fs&)!cfr嬙Zq9kR~.grt9;1p 往CoK Ur6j46$YF8 W(WPpj;Pq5D\q+-3  P[E;Pee:j8pK|J W),\FW(m͈JQ„ W ppr WV/Qee=j@2*ǙUG|.+94rZ%^nO:J>%<3_b*/͠6oqֿ1F?NRTR)׻_\nM" L`9b@&jiTGLӆ*MBڳC j$א`8P-=P8+ˈyQsqm@`(y,\Z{T9"\M& wh$X 1U/YWT*3\R#Nmzʴ$ \`T0B/zsP}tq8+lm0Ppj;P$#+Υ!!-94 2bCeB\ \+,W(Wwj ;@%|1j+ m juj%;P#+%46 \`ur WV~ Uj=j2rWo~S+J;a0}˭WW7EAIݣe6R]fO7ϲ t2[Qg~orhoCum>˾B]A:`cl(߭_-m`?d%I 4ݭHѿSDP(J8Z~LhNP\ۯ-\t_.;̲e%J85j;x99I,UuY3b˳ jYmزlYpۜ3N$JBs[ahLdj*=kδʌyفo6ʹiY,8bSc2S0Dʀ@0$˂Z@]mR;CZ0R%f,\\ABUB .b ?"oW&W$NJ2’R5שr<)ݴ_E FOC3*3¶Ʌy棢֯Q{Q?u꣠Dq?Ask+88 o.jEO9+j )+n7L#Zh*Yl=CeX~5/;;0ZU%{ꄛ.[⯢iDb[Sj۟h78KNlq4IX Q~>^y/xsV͔,5cp?c( ߺ7uHoPU?gEXuVrZOTb `|;f*^.:A4'UWoWZtsl*Su~ @,)O)j2X'?/wQ}W#>6n`fMZeG`Uҙ' 6"e.ȕĄE}*+!epUkUPS'C5g{ߪjRl0߽ͥV1xS95'#Oz6R_AQ@c0%>kGhMt?g_ũ;fw9%7TUTݷhZ}whbO˼LwwnyV̹t36g FQnޞR/Jg4cZ[92j{ItJ:}綻׹EK];'+V["mT},[kJ] Z 2d早i~ ̯,%-Z~,Fgƴ/5E;^O?ʯZI_ϜhcFFHia@hfY2JfQK x)ve"S4^lB/T@r16o% q׾*pスY:-!2Μ?dJ`X㯝Q($|tm ܎ZsY569\)FJ;$X^%&2\J5ž՜ڌccIiZ6S 4V`‰ rL}w;Cft;ۙJ@xɢ\ˠ>4 ,6( p"\Cƴe\vB{몉`Ad׋fruǑ!TҞ2 peG\i#B &\\KB}{erՋqJW r P}4#+%3" \`Y \kCxʀq*q5@\ n( p(WDWĕ@+P Z @-B\ E&,\\C'$?E'߀V1lJ)nA OXZԉ(aj Q5}4t\FC6,#+T(BV%4jqe%+x8G (WsĀj;@c_8LTJt`qfr5ցZG U~ና*quJS% W X* 
@R*\ZMxq*{|)Cwh(k؆r;vhQw\J.G\ Wқp5%]|P.ӡ w\J5jZ0+kٻ]|Pe^>q2Z]!U\KP ժ UqjRڊ :pN3Վj;P%#+K65|**%b7rk< 'jV0]I|R]ovWw^4hQ}x(Cߗ6E=lŻؖ=Ѭ%f{=U=m1@wc:1(6:( |J'u kOAD,ik)z5xFS ,Jfjf.S =U|i&EN-xӍl p:#V˱UG)ЗHWs+³7f㧫ҞCWȡ*?Ctu`, ]Rt]щ:9=#@a6t0hERϧ2]@2]#=pYͅ:Zc2ȉ^ ]jFtlu<h=vJsIW9]ufCWns+uttQq'z9t8C3X3z3hOT@rcܠ>fKt=BΉMWv64tGkitD/=`kh>5YZwtQ= ttsGp\*hCJ;]f:՟#^F_AwNރηAۿBA(=|+}=%fDWy>t:=h:vʠDW/1. ńU(,hn 'ztي椮:タ0a6 <骣|tLzVsWdgCWUG骣)'z9te5;fCW+23h*NW@IDW/{f7#`;#p̅:Zc+tDW/ꀽ] h;zJwzt4*>|dArkQYkG>ҏj) Xo;.\_Dvcc~Xt\6_hC 2no_۷oov}ȵ,G$]aHiwm}-w~1`CF8e҈|yWcm{||]ެNřۦ1[|o{x#{;/5+*۟[$״]Ckk"~B(ގ;(&-<> ;ת 28 E >6|s*#z#/o^~?weǞ+ :*_OQ[wB*hUrޔhD.vFJ86#L0+ $A|؛ ګ|qӁ~1p~QϷp9:"e5WWߒ7J!Zx8d)*klЊwI%I)&&_*! )fcT,QWJL\n&z3& iI}$܌˥Ԧ BElԱDr-mѣh9xRڳkEj|j-%J`bьibK1lrE&Çr^}x}I5KYKIX&%\tɚ%ep8VZIARKA("1MkGfhf n$(:&ΜUrR၈&3և-2d*6!xS!Ah61TA4P:jJa0 &?A)W 4ʚXZrhYlV~|m="QQ5~D_k7Ҳch`K" ֪KF }2VWAHT"Ux,Ib69ϪHчUS*AfL c&IE2 VA&YR4JFuTX I.%QC`ʤU˄BH f )˒P[nECE EG| CsVs+o5ed0P9b%WzWb$ty2C8Fn5ei ,q+Y'eеl 0EXL%eU>0\羛(XeEiژuT_9TuFdPK ԂV[9yA˔` fQ"\5"%Vu`6˰ sG2)GQꫢH$(MrtS Dzgd,ʹP ܁p 6CAwSA 2Pư)0eZ 1) ȄI@11۬U:}n: : 9ܓAE4C10gzB`@Y\DD5T r PLh_ f gTtgJAQā.(ʨq`j “,+yD),d~C[ ()'RnݕK("ʏR5QvO J2HO5-d^>կ*2$$G'P(JD2zJUh "sPuоƻ |w.!h⸐Nlx UܢAńTE>c88h#(*srvH'"!pB@!vlߙxc^ }Zs^"^z*hƕG5-o}'hX G1^*4Ji:YM$t9|4 \ p1/p 2 O :!.c؃ZRH VWgdIs^8ؾvb=G0A(辤#$h5w< om Q*޺,\Ld!*T?V<{]bygUe-m,93|Y͌ QHCtZ-x}s󨓹hŧL,Sе2kѨwP$1;2y1ρ 7DR}IHa8#tK_IS`ʠvК&6A 12v29*յcAXx kRg$1EvRh# ňV3 AK[x5t2Б%9sFhΤ$1S"ʪO&Cd~ȃ\ û88"vVxZ4CIe 1!,j %flk'r(ZJ?@gH'Ytg&*kd gcl0lVT! 7ݭE@}AoW .![j.q3{fy}>ԟq9loӮWuϹ;h 0tqtj 4zT&׾ak0vPpr(uVtk̺k>n9œGBjQ%›A27a&E >D5HcK eD4)Qrn2*TA3,0%R!J[ 뛶u3l;l n+FqĥR 4b!5r%I^/Q^0 b^sLexSe H!1xT,~ x)]ڨsacP՛iV mlIRw -<֤U TR%(I~Ϥ E!W8lBlO?MB= O;[iݧPcf .B z!@N ($L`(=0hMqP/ZWF M F9L`p@zO^3xp8e_JE$kЭuA< \8 ޺Ic6WR:D$bQЬ:匃H8q2St\Hɰv❀Rp%sOAAhYBMfpͣ!d7$WnI+Κ0;=k+eŞ@ɝ_~sb׋-[r9[,q+jakUtC1z9|~ H?48[jo67oHkt sc_VHjϴYS:o.?[lU8vv=>?S\۶^l5Z?zh^Ai:cϧ?M"xWecly>lJv6O@W؟(|{|}Em~ZEI@.W8.㺾(Zw]lU?W_Wy|w875֫\p{]Wqwu3\_=947)<. 
ӯqd@[ȝqKy6ndMIsb:^gU(b)44]F!!{!C_qmO~^ X7W舡D6'0!SNO}V6*q80mpRF4OǤ~8A67{m%fbNwgp.W:`?wvqJ+vqNˬb, n8@&F۰,Xr>V:j,5K]4KCHj.׃%j5LuO{z^=uO{z^=uO{z^=uO{z^=uO{z^=uO{z^|듦wNLz]h3w^QC$ 3zȓ@$'< I Oyȓ@$'< I Oyȓ@$'< I Oyȓ@$'< I Oyȓ@$'< I Oyȓ@$'< 4$P*$P[$d& $zM e$,@E~Oyȓ@$'< I Oyȓ@$'< I Oyȓ@$'< I Oyȓ@$'< I Oyȓ@$'< I Oyȓ@$'fb,ZJNI%9x$GyhI (]!Oyȓ@$'< I Oyȓ@$'< I Oyȓ@$'< I Oyȓ@$'< I Oyȓ@$'< I Oyȓ@$'^=./OTKY^/ZP/rC 2 . p d&R9>t̐ ѕr1NܾzוP"ճUܲA(&U0uՆ[yZ]524j$KWAWuЪH%GCaU.lEWBK]WB8f+#ҕSHfthwޕRບ"ƚѐ"ҕb+Mػj[?&KHŌ+Z2ub9U`tɌuu5G]H8]).G+jRJY"^gĔٻӵWMypýFx7+x/Kf.C[~mHܾP +޾.[1idW̜Җ5]b,\4S L]uVtARrr]PW9;L3ѕ] mw] eG HWeӁN?g46~_7՝he?HH _m6?t”}wWۥ__-W_]Tiy5ash8TbBZ/PN+㷋קgR؛/߬s!险zz. ?w+9U^8嚋aگ\so7Wb=. GGZɁb8jI4,qVivd +q+/M9vc<k74ar4۔Z~m__jŏ_6Gܗak/ͯ{toOߞ=gW_:=+p {oUZNa[y#3whAiRA6dџ'dflF[.< j|aq|N'v>^wH*}wKn|KН'`c{H KyZvhSFCc;lfl'B3Ji>SvPݓj,ut?-|)Or{q_룠F˷+o|5fcRn~w>HLE8Y p lƧh&~z)e<,o|'?>r@.}rG !A{' F/iޥr I1Җ&>\bF.z/g_|WkѽSJ[{_2y3iSRGIЦdhy3;Ytx6c~+'>Zє_<6CPr \K/ YSw4`>isp۠NQgem+N~hX!] 0ѕ>Yk;JiוP 7=_W ]).] m w])%j !])p"3RܩjػCu]PWTSyһ`&;+يv߻ 9*ՒޕHfthfJizוR&=Z^ tWھ%3\CVhZq> Ġ՟Q%njʦI?mor$ûu)hh]r\Z- R uC}9yB[?oVݔegݓ_woqvw?ׇ#lޣp.W?{yc7kx^e%,yUVAkTqWqdw*X$oIM~\3*9&One3'7qqw&Tk V޿}ޥm^/>-v>-|M9,?=ù --`̔N}vm~9R|Dtΐ])>IWB[JI>;G]U*,-VRJq+Yѕu{WGWe˪/!Aӟi~>ކh!L|]KUՇM1])ԇ4Ru\W3 ҕ';RܒJik]WB ]W3U!tp[p+VtR/]W~S] 4W3R\NVt%w])et]QW)T,] pNvtJi em_ՓJgdHW\ѕ])-v?ծ亚j&=%g0R8tC.ޛݾzc4hZp3D+VZ59gi.)SZJp *^RzrKɐC6+36udճoY|J>+̓?bh몍4j 7]Wz`bLtՎbEWJw])eB u\ҕDft++RکNmບb -Jk2+Jic]WJu5C]IתD2+lFWːJi)+-z]*bHW \])n53Z]WJu5C]ΖJ%{a"{ǶB 4-kwM+nB+VZPѴkz>.hRf[Cgl%Y%Oi~iTkŖhmgM Q[l9s3hf&Tic}q]Ct%])nVte1JultUzT ]~P'<4V(iTJN|WrBe^S54 v3+.+S J UJEsZ*ECR3Le#cuw])er]QW5\-e$])nb+R}J)<]U!gJ'ԺjÝxOF認߃AWzhtJq'>O6Nt]#e& uEڧޕ' 3R\d+RZJ)sr]PWHRJ 3R\B+R`P)kt]PW$ِ&̷>םt{])eq]QWS4+T2XѕRHJ)ѧ稫̙] n fڕ6v?ծ{WU<:j`} ̘P%P/wa)x1ihT!3`_JF'4W,ђ &y"* w])%j5aA#y )-v?gg+ܲ Tf~Gpzή ě\DQv6 ]Wz ҕdFW[B+*|Fu5C]Ő2fCR`C+-vzWJ;U2G uA5+dFWrq])cQwוRItd,ѐd3RJVt%w])e uSCvzWKfzWJ ej*?c8r#0ۗHZ F [;)WBwY-MGw"e;GH4*;bק`5]s ϙg9 8ו،])-uF)?+mnH/k >TY˻duJӢ<;g@!kF(@,k{)o.> {z+hjU{˫+RNUu 
ʗ?"X>\֙M[gvSInJu>u/jSlZ;N]E]a=7ЛlT/|d+&$'p'=[<ւGQ@| j zV(X (Hq+,sB{7(!Hex&Azb6͢~Vlv<fTF*`䕝*߇sw^ Ab6k .IvI㟤YdHq.xiRww׬ig Pv~׆izXu^dly;pb^ѲNj? w?̥' TAkoS|PJC"aw& -^53u2,ش8[cL,7q=ìUlkBo;vk4-oܿ3 O7`D< U؛\C<Ft{o~p~v)M/V"E 1uXuf*4DV QWՂ珉gN# [_!xyxkٕJbʽ5 `?^׭H\6"qmD(%А)" M.NpXrPQiȘ*dfD' q;rUhj;6}Emcm 8FXnKnz-BkYi^kC삊 ڔh"ηD(O|#Jg,O6b׿3 ?` TW;lˡ2Ir` 0ק7vqZ^FltkZsWW 's&WYWUWNc,i1Cqfog煬]d!с *M Ҥ/!MάMY@eHCmz d(VTg!cQ1*dU$C,\gY&J*yPjwvWxӷ}>nv]|WƠWpycBB C̲ yEwwyX+#ۏ͊^L~dU*Sh,@BR)I#y$|r'z.8Q9Njڏc 7/@SՖC9 ziⅲvQ|d [bF.chK'HbOoI<7<6S~(+j]$ns\zqP<5ӏ/C}q*: Nɺs0trm@irqqE!c1Hx|^|aʚ!"7ϋ1B]KNK:EJw!gCN/8rgUg~VmɠRRypsfka L2^4a<~\ Jš8q[Sǘ^ce -oRo,L%Ǘ$5x?Q7ZzYhvIJ8n(ek7@fGU!fhj72E!ʔuspr!hD;::WǤMD#Uڛ44 g0%u!?)9@] \h W7m‡Cӫrx8b:!AaJ00 8iENxr4)o_4_l=ppݳSنm+YF\H?P6Y44T D)$aDrQ&N*q/#/L$ĥpDǽ`=8iO&%*=vb=}XLe4GEq%3$QBD1Thk#.=T>XłVq2sXx"{zpp,ٶuNeR-HW'_f$Bsz)X]I@kҫDӎSs-I"EWRऑ/Jζ=hP0_k3đ/yҏMGBriS^{-gu;ۆE^w @f]V^ =J_95Tan͢ofZZ (gʌ0VX=Hv$ScFָ*^mq}s$\kLlbԆ>cm@ѹ /896,?s7ZQ>Kcz~p;>hj3Z4ۓFy}&K6AT.kM%Dp^+i 1Y.k˯UO]mQQ>|e=Y/fw}h>^"Ki^,%w;FXӡ$"D;r,e.v¶ F~4";lO69joO$jIo* hgN%:^ S;cbKmIm!t}؀B(L *.Ov}%K[q!=Q7,øYC#VPN؎[AmΡu%eCtJu!MuHp %4Ei ({:Yt oAO!gdoaRG~U,^^pv'\5]֖8#~8n;}=gVM=h H3its>'$H ,Qhx@*J/?^}kuz\¶'[jU+Gci\΁]h] !Ѣ-"^|Qpc21e;ȴ;0=j/dmeűeܺ٦ԋ(.w ȤTYp~b>/Zux(0J#a z`ܓ\gW"T0yAc(zJ e["9ЭpsLt%ie W״E!jՏdťRH?D9LRg¥*3mZǝѵԪ-?w]f PT:0jv=r.!/h,d9//K`ܢ_]t%)[ҏ)iUt W1yX<+I%_ߗ6Ie*(//`z>JS("}SԐ6%9* vXI>Hٳ6IJŤzI8,A$: e'yQ!slzH.9 5\p2Ą HЈH4uޛv./:ݓl.z2I"1 c4 BpaHi|\p32gœ=&ѕ{!lO;$$ $La, ̏TO#%E᫵T qڢ55JݫLLg3*%^9gp|Ơm5tQ{}&*3F$=|.!gz[DZ_;+Xy ͉c;ıȦLR".Gʳ%  '0H@f$"üȑIk5TsY r-0*fڃE#A4gZ"VTRc~>559U;tQdavvME{Na/,`dt.:nJa6(8\eZX+O!+ Yjt"Qi|9%[ [6:_mnt:Ukָ0+Iu<]i7.Z5m%#6Np '\oH2:֜C{OQەE`wƴx UҚ=OaXrRU{(e;0&D5k \.ut2\XȠC' sSf )L,\.T?9NpKOYw9~_mؒX(|q;Z!u:иbvg4.fѤ[WHs>wۊn甶̅*sw.tYoo-5|t`v=cW-#ApKFARN>]:^wRg )ٶ'>{?OMB okt8dkd hA^Ƞ):)m:KmI҃Ͽ*!M_wGX*?7'w)j:@%UGW=@ @.sU|Z,$`fTc5 1qLZCl$0ج9%SIB_Ij.[`km3N:R'{RABEB)|>1slWz}A;,3HCs4F4pكBUuh祻Cv&3$ EtqМ":{Fe?d/ [X/dh5R^( !G.Hïa  h2,G~i{| *h]j SWGWov=Z:d0!D+Cqp 49w 
&^szPj2S=v/9p/y.x_B%"os?%{"Xtqݝy2-3U{DIoz^a5Zz{/])/o/?3(L,#p'7d&]Uk޼/ H)-Qd5{|߮l]Cnǘ?IƙT?)2Kw|Z1wb3 Nl*G_h\*_k~|4ׄA6ɒ%P Ħ䛐 +RYr^3ib; :@A+5)?9rh W-}9|LWl< sWOd4Ҟ;fweHWBVC1:3 JܷoxM~-ʖdV>܃JO``?-n˾s_'FPq!#Aշi*g7Bl)͋A1;gDsƲ1z{P`+Yb ǾI4RԨ.ֽ5:ibQ۾IsL1;na/+`47'qoXN;4z ĥIk;E =F1B}6iK]춓6lpS\fOI ) i.NW>=T_Tn)<;PhR.G_쫝yLIw''N0s)䢧p9t q* ow)rWƭ$mh`5n\Bnu.;^oAF;\9 q\#!f{ؤ[?bk mӠ5ni.q2z%Jܟ/3//Jm?}A)LJPP=g"40}Ӟ…WZVƯohp.j@ϐgoid*0>}ﱳ0 ]͌d]K>3`&0}P\44 zn0~VNYT1>'Jxs䰽'H)3kS%df^W^1%k.x1^{k n#mb k0D_@t)O7>!1<ǾCޱؗo2˺8&:1\lf?)4ά9e_7mX+yX*)v}S 1мeIm}Wyjo`Vlyv8Η~{^aZ(ɾŋ)*G3r q(cOuU(tcyl2"/@I>}H-%pIua="dSu|k)Zcm[iΙ9` )PfJ֍|t7=q*L=_WM0SIuC֎[ 5:v7 .W<հD֗ ~7aJl^I.s6!hPy-=T~8, ~Kp)tn&Oo?-ej Eҕ?jmb:o$X{ȸ1Fw9觀Q,6ΗVĦC快r3#5r8sUc_v4Zލi.4o96v:䝮 \P9kִ$j^ SVdJSBJ8AL!֦?,JZˉuz$YyWCb(>H_$v%m2|_DБiB%aϿ׿q{DŽ<@FHilu&. p4OQٻ2 {ɐ+>3Ca }6TCͯϭ2*$$M˙) hF ׃:V|ZeSH,w9<+jN2лDr4k |p8J U\ Zs]r׮Qv|&TpQJ>LA+e$+DnZ:Ls&hFC`*Tl b W|檣d޴Ǟ%b eןY1>MT/^/@'pYݯD*Iy?0iˀ#g4CUԦPMdl\G. 1)iBV~/ݚYȒOAjPݑuiٝ﫡JU@Cc歜t_*5jDiPKĻWVTЉ{y+2&ݫRDu~`jbK`̄OV1T)I[ᐗ2֯#7՟ ohͺ3@nyN) #R0LuB^y©hNJ)ے)Rάݺ^NAUgq9U2fC9eW'wcT=]CJ4dg]j\S1³֙)eث{WZkcfGDQ 1T**]ˀ7\St$LJ8Olaȥ>s2dZEqՖF$IHnu#d}2V x4'@6G3`0.B SE$+MG MPM0"fmZ+hu&%IW Z_" noJuzbNo(z9=ӿebeYD0|CS 8^j ~oX~ݝ0&"'_yl0!Cұ! 
Z?0vvfDMusd:*ue[ 7DL+x(SDDZMjۻ.HY&ci(CT?'ʁ* ),g&r.>OQǾ5ՑٹR[b7B7J#{cU$7np)&SwJQ?XIVٜQHdS:xI!'c $8BBp׊[(2{>M=q8Ix Fb9aT#GL  7B(P;:I9NF/7%4-%dGh4ma TZR 'P5Ppm#,&iY$>W?dy' CQa=HoovJ-2C8>ֺ+>C581zqp>1eJgz+DpF@#?KI`Oyb6M)Hs@Ha22sML XSRڏ4KDJ1sWvY蠯isjȀvcܑeFy) C P*4Y)҆(eƐt2Ajzp݃zeRlJW٘7(8\=2-40?{WHr /kVއ?xgf<yJ(JMRG<%eXE 59Z""+#2"J.7ڱM-h,eSt.z Khog&nPK«^8ld.RaKg;Xu9w7I/k `h2v:cot;ݍkώd%O,4MA)<\rglL@'N9[_2سuw޶ _D Fi9M)bb'ytmRUDV?أH'm~wjY&E0 8zg !5Ahϝ.2vШ7 B$=p G*⽄.:&܀YCrwR8oF}35|T 6)+, 8(#Ac4zp֜zp5:זo j,\h}uFK j4h#*-YNAML4j/3b^IH~y.*6-EDJRMziH%@j@ZMvTkܚ@6oȁP"[rlaRm;WnR-pH |^ Ckf[M['8ŬBAHF^Jm։I jJލE5^D*C[Hg_e+C栽W.[2=/X%g;/v̬Նt=Y_~>hf_aCMp Mr%y-Q%ڀ$o_Y^`͹ f;RFv\uµ5@5^hLWg+K ,QO7pb8ei8_"Kyےu x%uڶޭc +`4hr jS^.b-NWx!Q iK J02FThU "]ͻs)3_H&vRw6q2|?O?>O_"|$tjq'2Z\yF\ˬ)*!UQ,MR#5L%p&ddoѤ LE:Am[#>d&Y08J,8>Ąymp rdIyfZ`vD &jBn1/b['FDꙑ ع 0=rڑ)3P%#ƣ0vXCqxH,*>qjܟחk0~QZ^._v0?FNBCOsܛ]̊aK| a;D~ Ɲ q ghG?C;ѯB;-n<0(ha  ؃EA `9u V~oc4z~y6zM xM'=8% 5Ψy;f{`z"smͤe?sԉ~Ѧ }y<^P[}2z}w7NI*.1#TBZlH1u6J篇ߒ\ WJ ]\}uB:U4cmeFJw` \Q2o6G-h6ZʍhT_?+_E# ,"0双0B!z++vG. =wV#c)G1@85vʾlK)y[9;IԫqGDVQ@CJwsNx.Q o/Զl:ͨܤw/ h`cSrYT D jlח+JmeLY?%L\?E ]s9-ވ@.Am)cRg[PAM\PiA%nIhJ߿Yt7Xwx;kΚ٦!ɛ3gln2ӿ?ØkA7i{ه=)l=E7rZq/ w?pYqFš~^45wgy%b]U-;qY0%KQ.dxyoc=;?m۔#x(y*wC7or z IW5?M FzCxm.w?eIgHH3fc-[.M2izz GW5d!1ܣaxS5UφH;cLzgL7{,h,{ļ 8H TŘʻ*W5y7X0 J"/8)z3 bɚ1!Lҁr|j\(+˵nU51b`XpBQ䬟iٓ[H(o{׸˹}ݵmt,xbGr=W_C ReVe-Yݒ"uwwUImK G>\Y>\>x{1 un*e~jM8O1 ^NoҤ<^.y* Mj~ݧ`y1* K/k>) ntL1GlT4N.h^ċ~fIxin?#'Q UG6УՊ"ֶ2K(GLv:CէVA7DŽ]>v8U:3KkdQu 5I:cx G~J %a}gۼ^x]>6L2'PhztUu_J:p7&b#ɤNTN1޹Yirk4I`+ #Qރ.};*yEU[Z34_^WwPLR â=׹0A|pjdk;hd[a~:t1w4c^6oG={{6^nGͱfD`FVU9wIJlDP' dS^Z*]uāB_Kx~\îDl֒k.3ՒH矢ﯬdןɋ|[qSV/tVs9ĺ>a^e#:^^\"ʔQbL ##FcJS5AUSuflQ]ΣRhTB^$ \ 0L0}=q-LUs%T^AL)N}'+$,"9I66u\(I* l W5b:LJRM-diM^QD(b XYrK)? N.Zv>@^dq7ZLL#ٶ௕)' ƆѢS64)jR`D<ơ#OGMkSP:ݷk_gg r:bXH+$A(I#F eqS<5̅}`γ}{4ߟE׽yn Cob *;8he*-PLJK^0fAQD8;%T"1DZB%WL.P qϚcN."|FRҕڨj"dZrj>admQ pO%nWocUf Q7% (ucDPO5)1]D+\udÔEtÙtQ 2ɔֈq! 
:Q,!XWwd"6j jr6CNRTFۗj' 9NZ z,UPp˜3 Q$4k4Q( =ӂ %|OE[ED"C|e&fNBCeG AjΤdh}Dг{]fs0VIJD a4t n-[ݬϊќF -fqgnϺد+E7Pv_*]xX%ROit;jzp cm-|)+GvJ:ؽ}JŻQ]~AT?tdI%:2 =+Q[TSyY@Kv䷹3 r;C,]In:f!iLt"vA 6,c*(C2޳it9q!Ld@R4DV\l*#ZãvITsGOݚ,]TI N_d"t+9yy3\b5FU;*d!eNypѧgͦCpj{>0ʾoQ3GbenWW)™Uϯ{|BV} Z)*jvpD0|ٞ1SJ0Md-U!wvi۩D=Q٩ԫHcUjO=[!5L#)MN%*]!7d%?Utu,HaEKгpV1f86䜏')/P C5Bdˠnt nG.>:kHaiP./%P&-'Ūo,, ?ÿWoݠvG|n*} FrZ.-IhPLm>-Ã_1|Spۯ+c_{-M _/t1(~?֎\?WtKo}>OPsvg+q^M)|iL{}c &Wc 0I$+(;,X]:qzbΉυc1}Nj4MBhiMmז:D31VHv­e.'h6'[M"]8d$MY1{/xf sB>-׺TTȸJ5G6kS{wSnGg2BZ)=Du)c70Ap-'PMQ"Xur_ mWW_Lm MJ >w98RգGd\#v2Y[FU%dP5=wxr },FB/Ԡ`"|5i&qa*FB K`$2\E8LDkJ毜kF\Jp$ǒo^5ьuJ{isrxDQNO4>PwFl^"Ԙ p׆N{\b#38XTwHhSbi'I4}w@N(]q^+B˛L'i GfŵH"t?~nTfm mtw,Q{W@]2]OY,<ޖh&RjocuG{@&3^bVaڴ0yrgq0dQ8@m-ܯd!I- . }^򀼼"ɉmJ̣~Xn?є 쑡i/)k*GMQ<|v 79&hҙcM'2ԙ nzJu{ tfƭ(5"ǣűQAT/3\x5COQ۶!vo2h湢ʱ;kH!R-7Xz~QW(ٶN~t:PɃ?1PۛIf_,bQ2qy Tzpy';NSE+d\QNiݰm +d3wV'"Li#IGF& ߩƑ Awz1| *k>_wapLJ IB#i%#CLeȢ Dgv$t0,u_?^(^{n^.e?4Y=>Y.S.x~8)2+qf??J82pɅa)b$b#1ըBƛj{-Fsvb6=eוlA]Xxl܃"ܗWl{uͼ9n[ _ɦMV'PB E*yNT>\WFR?)>F,5S~gj\j *d@hB9*d38ZlyB=xHKj?痧܏!W!28VVx n,;AH_ -﷚MGۋAxۅ"Xp}]Ҳ_cJĸΆFu[wwk,R{.RK8e{RY(vq2z\NBFtTguXbԠMsjq]sJ#߲q .CB[>WSw%Ww+-Rw/QGƩܻBFkp6ͼOu {fv]O: elLN="gФ[Mgr<=pT tCITɡ}m멁Q ހ?s !mZڄԡ$> PԢcD>"q=G@k] Dǣ!T!.ER!?UΨbںg2Y,"g /.RMȯWsjC?̯C\}%|lSVpAXOg= zPɒ:eA9UW\jadIKE$m?,uӫɠTm­L+'z=F5\@;Fdy*/62(Qυ@{"l'3=7Jeb?znKP؃[ԟݟ+()(wM%܂nbt{+fJ&Mƥ=%ɓopuPm-592Y 7;ޚVC6ͷf&bRI C2%EQּ<*gݺR[En WH {{΋DZViS ƔFk ^Y:vz5vQǎO?~;3.;>$ɼ!L{.B', K֑Ȑ q:q%!k0x? ﷷ0G7u -چ:hMH4wdʝ0 *htB6V@*w&0.R0iRSʈt&g h6!Upqia5 kOY8L/ ,B'Gq]"&Bs wB7ai9Z嘖x#;Id1g"!Tݫ=MT]*YV eLdmYc)K g%TYeB9 <56 i":juPQq'S Hաj|@"T@OU*25<=U\LID[µ:Ul]EE\d2,hz5nɭMKIJeA@e[/R-re `]b|rBFjגx\fJh,׾qhlAX? 
qZ'#_Mg{Ƒ\<KJ?ɡ|5&1˔JpxhB }MΊy+IĮ%8-Qn`D&gI@F+N(r !JSZ dLk4BPR1#؜`JgP8yeCY=uy@{rl z"?;D͑*Rp6!Aq>1ix#Z++'828p  v)j1l!b3's"Gx z0*!Es;]T@E0*%H*Jcܦ=Q2WF]ͳdS@ %)s A|@B2 H^bXX 3beˡ^И8`Lz\#,cƔr"[MP}Zf҆x>)Csy#\f\  &r)5%ņFy㼦F5ѦR74bn V #Υ)̡4&V+44vK6,$+ paBXq8 <(X`,E,iHBŀ"dzRIp& Hȅ$\pH%F_Q:їHʓgB!fˠs mc h *qkRQ:Cn[aVnG*EN.<$A~+9DhmpzLkE1#qXS?X8A(E; C"qM.4Qg1;H%E[_ E!bvJT!A5* AU5'Md SB{=M*Wj$}Csae)74B3 \ g-kl]4 ጩ +F2H6gqA1&ԚVܯdž- ؛ˬ 7 mQQ|>M{M{\hkۆ9q$LM(*sCnan l7pfk1yi+\ʉԽ`ノZ5r o{)K`+J:w*JIj+` lw'(4[/$H&ahM^uW9w],FG hHu][,{M%@ J {Z4$ $A # +_0|𽁘\ ӭ ]_߯28zvgMQ-I'.eW[Jqwk  kNWp4>߽JT_b*M$n GÃnضN*';CXg^ڙuH@)&wh\l #NFi~Yb/قv{,U&^qxHOfF28;2wV-E=_ypk9-̖g p5f8I荊{Wij-ٰV$ JW{y6dOqL] go}jC3y(jC<4py*]r6t"lc)R$S}R: vVXc%-EgTj#i@LےR҈BU$e!hk]'AF> Lbx.'\(|laA:SۻA n]&գo.:BJ'ޡp*Tk;7_V2*meyVEgDT$^H n_F!uKf q**'&]9XqcB/χI3*/-wߨH3[q2SӵSU#wbtlnaTopu15O[q}'J&}LPSx4̟f0}tw$ˏCyZ}T`3<~38K{ :Y9CEf*1ˬ#ʡJ,hP=t*7>Q]Q!S=`@&{ {RW@7n#lf,jMv9o"+pgbڛ=TH.'Bp8WD3$+f§r_4+<'f*8a bx lZͮ?R4I͞8#AL7n47}TҦO_?+ N苓MNtJͱ}xw& ׊ʩ:#ڟe_U`n>;k/^LB?e]&5TڐDm*NfW-d2+yuQǢm id:.yP_"O6鲺|sFlNݩfvrr9uU=)B Ay ۂ.&ծ]oYM!dYsj7WWO's rF{N+~ *r4ůro_`|~FV7i5 1/^3L9ה+s=@IWC̸*sd_arփG5&#DK+WTq7\}Ys=0~gwᯠ$+Ͽ`慷_*55/UQ#~Nte\z^L_S8l9SBk̦(a=:$gKVZqhkUwS°HV=&iFIƴ17l&%|HeJjjg8Q,!Fb)qO"pe6Xٍ ]:$7qv?N{\+jxJe<1la<823}6ɶ9KUSTř$ ZV1\:-qI$`2Z Q'* ).:mzsx/r&w@t tS1wؾޘysFV)"Ku%s9@'vΉ'S@nuWN+(&w 5L?F>nCIEX_dw܌goVµOOsKێ uUE<ƹ[ֹulpgPiP屴C ``0-b6"k6LlKݳgK nfrYLIN^60f/Y$ȿo5%۴%ZMe,'cbW׽4zŕG^<{}t؈ml Ńb 7(Ai{F"[ UɟUۏQ.c?_Q'1&s?N⥕0"I('#Θ0"37dȑw;xBc7):Z9Hu6~OB '>{R9yպxo]_fSdUb"ƿvxztG(zJ$绻Orl}-x+޾{t7;O?L.Ih'KWN7Eۋ T̒̓Q[I,NFf":iƴ'n-2x:c*y2^۫+zU֊U*CjS+$Fxy >Yq. ܮz'Նa .7AKk{(pk{K |[>ƹ"_fNT ry!s۟x ԫ .lp?wF(s8"mN٨bWqb((޺ج9!JBjm[ Ya^ײU: Uiw}eoٙ;cpo8h ǒ'ա_H{-UlyyS F]%8X}sJJlrw]bY2t ջ sR\`*.SL gC `.YzBr]ߘmBݟRJ Rz)C5W6.,!(ܟGV2VlkA ?t5als$&eAYZ"(Kz85(LK뻭j}Gh)=#R4ȴ4!sck{"kfm"ˈ?ު{3$r8[1Sۥ틚pB!,k_K!m{iX! 
!h56F!N]YU5DcZHc*?2>s9JQHiy(93y8Bp1k ϘR3,uh=2=[koX-u9TWr# wjF<85tžen޻dC~07aa7ǿן=}5ߗ0\KR`zg,ʗ$,pN ʕ.4bZJGOi2ICªFꚳ֋?z)z [<^ZxRXsb67WJ\P[^P%r3^2U0k#dA10$eX<`hbn\1¥( K`3?Yv2>Yg MuIQm`F:)K h8kEt,0"g[-SIwLHWjKgLBiB Esd,:V~ձ$Ecܶ:6̘bxU3%26Ynr4\etr>fE -tjL='VOSK*"V1s|$S6g2ÕKl|TQ|s\'!v~2iXjj(_oޤMm]"dKRaeq۳Z7pvEό8yTփ`'dg'lYm^e>4M;7H<;r77_.)J5ܢv@_O/z 7vGxVyAh^'u4̱Ѓ4Q|ot9$1ܓWu]20Y݀6<7tqv P6}|d0~NQ<:hE"bL ?eޞ)d^d"T:噔DݠdxͯcͱOY>E!`rDz & Bb$<}ADNpaQE#v{Ĩзl^zؔP[C^F6! !*ÔGM?h4ֈ,$ҬsH jpz5)cR{Î>S *iJM MCL6)r<.ˊpqŠlK/LPweP̄(Il `-VPE.06 An,KAic?.7#Nk`[nRU- ]~S05]p,n.403ĽD` g7ń5[Y2gyƭvpOp6C0 %s*[~]LȎ)e𜯘T;! />IkXk%Cd h BV9(ү'ڪF?G "vXs0}5]Ұ!G*m{7,3ōۺx%`ɎL5lE!|Kr\:jkO/Mi&E2C#*;zFbSh;i'!gf/-.fwQq3H#J-39%+.U^sAH  RK4) @-wMǝRn@!z: ̈okRYa:&Ά7whIhj-!u(%c6)=vYsϸM”V9'Śv-nt%6 k*طULeA7qp']4,Vm|q$Կثӡ4Jq_Mg؁6kff7[/ ͮŖ2G빎F$:$!jA4=t fR!h>y>7V!I ޖtХ,TgȬ:3iɵrYwѨGj‘J#-SOehc6=eަ@IA#fR"":.e!kXcD GkJYy1\K$4B1z ^WD8apI֧p{ox.w7FȨVZ5F2Y-W|iT BQ58fCCCeV(d#5hw̌]z^ J7h<3x0qF:ϔVCVȐ5Tz{xå$R-eЏWg=K;u:FYKT$e}E q-~ [2KǴx=@֜]8i |o"NNkXJ=f|.87QhQ DhPL./D[d>OYZ/[+QSaܗس}HZG2Mv1:a$B$i/i\ϣoo̐N賛 Qmy*_U\4B>,WJY_28Yt<΢qVw<o W$Җ@: z1\lɏ,4cc;QKˤC!0)}SZ9n;%vPFzS*xOPCmVBLv 4b @}l 8Cܞq껓-~_dcQȅ;l?gbh(,S:S Ԝ:Yw5VK[2mտ !\JLs( d CfR렼CNg֏V|JÙH6w.÷HB5ZZ+Gôvn(+$utyy3fEط3f?@.4s(A]$ ;@*k3 +O**//<gZ6/\y?Jr|c˥yH˫Q7O_JZӜH)= ˑӅS @hNf9Y};%J)VODu?jRMHFܪq;;I=9LNsbsy')cucYo(+v` yœb؅$ pҘ9E_E%+HcbiΕ\9J&̩Wd:TkIZ\Lj'f]s.[E1Y-}AbM"ZqC=ք2$5cC ΀N1a9sHPUyN W fHSY s#%JF@>W'* w%Qw!gf~[-M% ;d-'=V٫a!/I+}~2`Y>L{o17`W,,}N'`( O;|nc~glh Lasꨐ!^4Buu 8RpgD hyVQՌ5 3jrjPj$#iB;[˟3;ohN3\52v#뿾·G)ϊOQ^-2)'s޻V(pÄ| Y''ϒ1ŠL*BDe>#|Z`KB/o*>pbo_PY})1`KkbeqYXK4NqwYm9SH1ZI$܃Z素q8SZ`@ԅ"&y{5hp&!@GU @&*wS5,|%] A [ ~~U Pd HQDk4VDL$ԃ* A' $x0VڲǶ%'r!Er"C0 cI %ߺA2g`_+CxCKeVϡ}YJtO-@qcm^3Y^7 kɵ^EךwR}YTfB1R:?+ cۄ A_(bbX(S(E\% +#1o:8*UïxE">'Wge0,Lr>wBKVqH pU׵j\ fɓHaZ|1VӪ"Dø]D? 
#h2 _f8'Ϯ8})v~jdR<3%o7!.nt+ɳk<9dhlS)v`VXf6'1rU; iPa N( ms"@׵H]ɑ=ꤤ+uaw*ɳk>98>.ZޔiskkRyFWڭ[jA7QOޤy/meZHy2tWJTQ R?ׁp[ m5KP'M{1(ϼcАTQaaCY~2-kS,DЋ՚n]x5ev:ޮX oWn~άs<68G)X7O_fCNP0>FOad 5/] =~GobafstU{>M{>/؁tJ.]P¹S)$%/CҹJҊee66c\r\O-NTՉG͉/z>ЪշtTF]V?~o9 μcyF% 7<% ߸ r̲F;<yDȫ%!67fӏygT 9#|y"М=@m16H >+ϺVE 9M`V¸JF 6 %&¸*$F 2 9wܾ>[[}=&PxZzt4(eJ!hGn&-H1"Kvɟ'CTHgMQ,.?YduM_!Jx^&2^N'~ \9r:I?o߅+wJ]|WyLK^0SH/2yDp& e F !5E|/p^,e)*_1E˒#GF;rL>fUsKENl˛4\+V٩A*vgV#s5;󎭿󌊿7_t(Ff5¯KWoc!D %B)8<z6%8 +Kc)$_BrM)i|)$-^5Spl^S cXEdd8\j?"s7yF+9Al;ޅw]{W[ Ћy4FJ"(FI#mWV{ r 9E,' FJI}QR&%xсk3͙T8|L)ƒZya!j`b8L62ϴs#콖(T"/ٜ,5_[l{1@_$ K%K# >F|GmfR)rLi;mfc!2//2 ƀR[,y v8&;}HgëWo^_Hl[4@L]IZIaUGޘTKD>bD3)evkL>xަsCKqw5fbth{>|{q<=DN+LtO=& I)Fzd2Nv|oo~WʭLV.HAJ D1!a,a=iUMi[k]JZO69xd }}BRu7[O6мw:oh09tɕb{$c J|}pyxv|;mn`Ǭu>E+rN-Pv,]$JnG~bV2<7?X$t/u* r2fпpFn|1#>f& ~axqî5A{yipnOch(t,RŲRذ˷S#[0拇7Gv(u-Շ7/>Ibas*^bC#'R x]lMW ?odfi^޼fSz"%A0_T![>NW/o 珫G<0,h kSNoG #8fB^.pPD?Qs1sLjISxj {P|E[:zm#wrR<=ؑEt KG`O3~\i*'XTixQR 2iF VTgD:Nbd(4cU't.[YjuήJ fN.)u`kUexNKj&LudTN^$}(T80*(g'ĤrC5"%mX꜠J+]mf2qL򬆣-T KP8yVC#EjXd.E޾:sME@cHZ'<j_G"fOT޼й>,FTڦr <>{tS^O.^i<CNXo5Vfk4ˊ9yǮ%m3-iC$|W+@ O"F0տ|s$;-1Ϩ@7<r_+qG!3cDN:lc&س=fLm"^8%ٙwl!yFቐve{@S}'-&pk1WZ#gjZcǤra5ʌC"cL;pO D*tlޱ)y+Nf˷,c@1;_bDzsx8qώ#R5v}19<+gv]O~`=n]:lrs"jpEPX ¸ 5rj0T؆QBi,lfaqنQB.ZqfzpqEHG:D\ƱN!ׄqH;HF2!˔Gna&2u8JhLy˗()Ty:Z֣{DNq cq|rti2QJ챛h\hG&2,׃x.[S.|=U9wgB%~-;%-WM )8}5ըKpj^sû'Yx:$ʺ5N _Z 2_KST)_-?-Ղ;ZJ9\ Pɱ B/q +@qDž4#%e1Z_#$+_T wWglt aw-avhGY7˧FUK!=ﮍ'ϭINrrTP]ܓq8c#OFFm`l{z^ VsyÞK0.y !7s#z ]f =6!やQ%!H]N7"f"Fq0 N Szf: %MX37WQ 9Loa϶R8ZIFO}IO#'H !΢8TK2 1Ld<Ʃ@A$A xC81zѶʄfeʮ$"&a92]DwJvHw!<^wUUG>7!Ѝ$ά Bs/l9Qa%:L7]fj͛Cg6 'W%:ɰH!,2JD >>*$FLEJd&NH,-H!drQMޱk|撜+ &p>LM8X'¹^$J>e1AcT`%iSbLiadF ~eJLl/rBD (E{K 3nF'/>ipj Cmv?2=!)_ m0%&(IL̇y0-|acrpMPfq :"YD$Y*⊝6ӆMN Twjgpv8%2KoAb1 S(I3yku\hV>{8eȉsPu`ЉSBGyp匴WtoQr^)),RR}EJʉUS*{[hr@Ljt:ʜ> id{iviK[unDFhg/a#]SiJ .Љ38+]xuwZKlV_;`,22T^z4BD=_mߜemL ~l{G8i܁Z;ϺatcI^^GZgi푩8bȻWK[JwUAaoNzI09CJ< \Ȳget[ᛀ!HIM]FV^gɍ;ŝF16 lg"l7Y/ji߳Q򆡍a 'NQq T#zl'Wꙧ~O/}}$Fp4\|ktj?`0˪j,B2hV Ԓ6V1~? 
coX%(]AQvS]3*x4{jGsr 9lENca*W[JEB)beQQa|r%t,Jf:a:U88q]y ul[s7y0+랮}N9ѳԕ|5FiA9DqP,H, b:R"x (ÄpQK{äJd :x8r,*[1RE`o2e{ۀHKeRAkII4y{K-j5$Bx{GK BK yWZ\gpzx/I)z~x ΅rĊ{{g9+KLA/߼1J;6! +k \ݴEaƻ4z#:*wWr{tQIWA܁߷7xqFϳ;qɻ")ESЯEJ~NﮈuNب 9Of]V(g;XYwtz#suf7)ݙrrbn KjfFVFBI Pv=rL[22mGxB kxvEnƬdI]Y̲3|ܲ1)d"mTE"P`\cdj2e,1Jg8r.7 l]+_AWk! 71V39b5*ꎗ\)x}IfL̆g;>g|EqV9RnG:I$I,TZ`FS l'I,x,)syii@TsT)*l[uΕ֢=(G9ߣu ʞ5fCr4NTΝ ԔMh!cypn~^), llEpb6M?&3ɟ:aT R3Odr?J>]/WWUOoCWdZ-B=[Ivb')O-+s"΋oPd7>CekY|*SNmX7)/eTǷ*֭ET8YHM[72%[򅫨Nܲncv 2F]^9 ۴u/SҺա!_z锓`pG rvh|1(:mTn"Snuh^::cۯk4ZgRPxk sRKMدj sK3>Nu$ ;n8 C8֑:8Ap,F'qB#]98AH)8j Z:T5V;OU?B}`7D8km4ř&qd} ?b$T"{˷i¹yl< u$P4)MDpO4pU\5WmMkpcS\_ 49>M;}}SތRe2 l8=<xdyKaW93BYR_> htk(\ Ol g|h?1_kMlbwAMˆۏL DKhi[pВ"G8ۢ\@'p/~XCe%dӦQPf1ޘt(8V, o?(biFs'#4c7 Lc㖹E5bcn>&iͫ?Yb/rίd4𾼔ڐPKuw8+C >1o `)Z$`Ƌ/[M b`#*3G<2{na[{u-_8 UJR2nsPL*F"DE2"&&(JH!̓ɁB^mQdMfZW9m*X^g(Nm5efw0nxL!r =lwjjכ^R="c?ZboːeۋWL XI&% Ht4&p##Cs$ 1ş^&O[]_ckeoԕѝ{[+3OբQjqSlͯ5xh%fJʀgA|[xF|#Yc|? &֎- U[l`1J!#( qSSMF~wYB/˼rٰI?Q`I)vȌ!GJ()!8$IPI2 .4H1#% Ƈ8?O{)@۲RA,f̘8qQF$S*B#cD "Ņ( [O40OK}7Momo/ނ[}Ϸ0t0oÅJcyfBSVec]_zuȫ5N?[n3T։a^]hd0W*'OB-M1nw"^k6|s{g֪`) ^ywc χw`O;eCg~=?.)$^;<`<׊"~BvrMG=˼iALbn#}1VK,0c?qjF2)nz84~ |hIf6LSG9桭25 1&*|AL<ՠ[zdݫfHb˟GKCh6ЮJdqݕ;s=\=m(ӪYi4Ьz_,HNC3_ΉhqʅMq{i9h,wQA_ÛIzg j|#zY"VwbQU@6AX7}?T@9X/F!}lW {{3i@9TE2>fQR~1@K-[9^5uB'V?iTx H/3K=/}KR+P()-w:.T<:ŨSyFjnubZ ;srd+)up5ɀGֺn9 ,$YLE)Ɗ :&ya*F)ɌR1%(0J5*ېWoGdL;7a: 3 bEtdJL!cD`7#+o1U,4C{EAϬKq,w{Kvk>4&+@BbMO5G?XqEsg/T wϛ xNB?b_s^eɳ09GX091FU﹊: I]3mZ@Bg %}Ta-U*Vj&v|3h_!# 1F=c f`]?<5WL*rvc#@jaVa 3'ߪd2>ha@ QUf\3"M&)V`PMm*Ԝ* n 32h{4NUI4VtW*]`FŔ|m۷vu]R-8:&8GӴ{`:bLF %SEZ%9on{0?*J QߔF?qtт]Na8,a|xE=aSYZz|z\GM{`;>]g=Iȏ].9i%P>EXiiۃy6|?{aÇ2'n_|_ėϕ'^ç^^>(}Yyk(ny5a߬@קxۧ^KtZzͿ2"ӿw߼2yT̋7o߾i;apoúӻ勷ܜ~N c^m~$z_?^];$|+qe] 5dҰ1~_780L-B叫 : I{EaVTaIp#4,?j #;߭NmQ&Ƚǿuyѥd^t)]JE?%N ^}@WԊB nP4%]D[sّ~jꋦK.¸I"߽բ!Z4 ?K+[nO[g;unIb idVJE&i9겫z({_:/$J<5j6L䔬>8 5Fnx( mHv &. 
ix`CX26/xCX;K*$ސ^RTph@w3F&M@TXaO Fjcn-7:١AEvq"}wߠ nx HZ - U8] :(sfuI1g'a!p*r:Zswr*蟽M&(Ų'N*5(@xdwINlu:VCF8t0SSRAjBe5Ug?V(¥8nuԅ'ͦ%Ԭl&O)rUk q 1 ]`~|1|0`jN&87uc3*1)I)updJũziE~) ]p~YT3jK_d*jVΤ,ĐU1%H۬Z`18 0">7.E@ujM(YUyTC橡IU5nM1yYJbca`U5Ⳝr`"dՕ¹h@.˜Uc4It b&g]Ѧ"J):!ZUm߼H'ͩPOcRڜA6z~90%jt!m-N=DAL%Y:9ackjPs d&/Zvv vUSpAݱS(9%O}ԽަN1oCiCk/{Lczߜn7?MZ|}zqY/?xydOW8]F^1>T.7Qko]i鹭͓5ԩ}9{8a7bh`T&РKϚn2 &נQb9P| 4Bpt p;ۚo.8R*ws~}R_/5TIW_)Ǘ_%zTÝ$;ogJ#2j[c`eJ+ WQLXylVq[Pb:蕐e(V!x{4F< n~s>,P&Щ*?/9aqƶ@U!V(VgP}衴1Cѐǣx4Ϗgnr<|E ܁wNٷ\g35?s}(][ 공]')_7~B'$k9D~8с)go!q 6]ibӿئy8cU- wƄ%iR?l,78Kw>N(_M]PnJ =v Ɛ!x4#v̷?A J2ܣGѧ>ѓqy)2-2=a߈Y7Er}B5e*)o뱅%5$y-p1X7ޯz+xa()y)aϓ do[A,9cNws|f2Ip f2].RJ .TLVMЁw!{45{%h~nk\so1[hgc1 '~v,CO̳u]17;f o=k8soq>K~-˄`F`E\T\T1D;6Fֹn[saY-3 2iJ:Z$%,{}dž~-uEٜ 0$![@svqg0 Qz|2>3wp[->|"g;;&h* k1X9*b6EKQRT/Ջ~jTXKbmK6C9T 5n2\LUMEttȻ-hR/Dw{Zs:R1Z5n72.n#ȸodXd@8%kEcMu寋K5Y,j` 56Eԗ?Xӈ ILlVDuװU Jd:Azw`]f73Kn={VX+w#́w= }3//=33'f]_d lZ:{@^x?wWF̆1v6G.'ըUb]%?^]>5 >h>O T{酳dzܸ_i ~J=$'/Y}a(Hq,֌(%"jyc ]"Y,\b{T>n"c\Հiͫjoqv<iPvk௓ɮ- ߏG`x߃AUOe"m۾w0(m :t\Vn;cΟjexnNS<z4k+4.aNi2fN}|s(@e8\$iofzjloε>s Sm>?S!9} ³,Fo R&)tjk ;6{ixl{f=chӺE]0J@worfi.F,F#l#8{iAyN֤KNѴ Br2)$LDB:/^g%`KC/>TgLC"p{Fo$gg c!<LhVܻA{#:B}%+Qܻa&}ާ7SU،^M E;z ]H$f}!/7;ɇf aEY%AV*288JHP2Վ*J(H SYTT@*5- )jV!eQ@_ؑܿ>H3Q8Wڙ:5 ; IGLjS!`ĊLG{gwuJbqDKMRB6` 3͕~dfW9+ UHEACDPZ,eL rii@`{ ׁY3I塽;s"hգN쵪قS@N8f3q 坋g[}s+|?V [IBl UVYUhB^#}*\c-6>X3Agѽ'b,]NxG!V%NBlk9mB##.1+W$4HdE<`XNm p/pteKP~s~i2 ?TpC!@Z5A5\<۾*P9.@*PԥTWLBSMy^2USE$+yYUa0xMNmwf0@}y Geb.8<#,:+rgNKR =jBksYp rSB*f|hn"& 'u!V){HLZNZT/^M!bUC͊TK\N`d/ Gz|cMVNS0reCѤIxU[ ޖrLF7$Zhx,8{*n wo~:ŞzMQ=*WJyQuNLOSRjcY-YŋL28.*ln~ffyka~P#\iӧI VVjh?ZPSWix>٩ֶU{GL=YjUв.bG{2ڡc>$М5Rۄs=6Q5KF;G#X x~a7;Cǿ^Q/pE")]+-!"ϒj #ʪuQP ]0]nwe*y]P ]ty5޳c9)=rZ %>vC ʼnT-31zbkwܫ~-I NtotTP DvĔ! 
_!&YPX(+de<ԋ *Y;.87ij0M䫠Fp%*o\+)ċs"IP tUdH$#c7Qtq;Ht$ 6u&Reƃ+DW1AUjv!~}]-ާkX&=)=*WaɈIhpdp t@RJ _i|lN͏ \nϖj|C91g RY\RQԨ ܜh5S@ | a9>a*Ly8mWJ*-&WW+0[vL5qJ@1!~?b$TJ-*6(T _(∘SH+B&=ꞆQfjd=L?o`ݙᇋT/f7L-&?/X̂,X̂^O(iƄyNK\p%r(*D(9jkW"BIU]QP2Ϗ+߂?Ej+rWր2哶 ZM+7;ea0_QP?Y8)]L*)9Q+(R8ݫ@UTܤ\ɲbJjGd~\ۉzNܤkR-a?ռa>׫׌mR˺MQ6}{wl4}~յw0i,evܛ3--'\›FIpp)x2CBc6gPOqjw>* E®b"ڱՍIUʂ TȂqYX*0e%èPU*9McTmAwjJq𨅃!^`"7 yP/BBW]ٙ*KB_Rg}Iw؄HM ]Oʜ"x=*G9ƴu= /B!8[t^bG`8ͩB5^qv!\Ewt81;qtNжRpճN:0:mtr%ԝ[w;f6̵djZǼ|nJw~ҼVITw=sÝMP6aϮyFf]d_r"Sd [kNtTDxfʦuVL/B@Vr\nغp cX8*!EV"ψ"8ˋg5.1̩-[|[Bj患;CsXX4x Ʊn2Ճݛn<*֯3. ջw;O;OVY'3٢K)9vYv7.ݚ{aڍ^,[N@xG!X$T3+;ju[E"Mⲻ-竫ݖ:m ujz#F͍0Ѱq$f{iDp  #lg'6 m"Δd5BR=~Sk(%8 *ҽݒ ,8hU˜yYCbīz֭@EJYsI0RRQ,JDMXǃ}c'x0}x0v8.s ]t8"Rt|}}/9i0V_RO {Z >ZN;8[P}*5\d uAv8ixYKL#VX$+4rvbsH=ֻ0L[z=xp\n[SdIܽ@$8l[{9͹r.h\ɈʸKS^ׇO %J!t1#s/t SvWD!&dl1%uOgIvM|Ox_wޜViiWӃ QJrE8͕P%|nthم9^Z#tH;:&0V_1Yxy2X?FpB5 /WN.x$iCs8 xі,E0s۶Y˒ ɳ$mI80MqR" ;>+sLUjfPLTnovq;<~3 F,vMKY̛p JR9ζ-@}"a&ZN>% ͆f ..&eN(%3X#Gx״㙠#^/#7rTIrMAΝH:Z<:)bO"[cZS*}|E2 eLE+9TXZ2,Rrb D>۪~Tކ:IGm>;Ouv[YHvm%>]u NNcެsx$9YC-h#b}[ч֕s="߬c A&o.JeV&/{q*D֑mr͈Lr{-kh⼫߮I]4AebI\51(R=h#ǩo[7.=_OI1 (Fj46u db– Ju ƄD$2β ^HbRM/~sV4||&mFTKM? E 5a 7̘ffz[b$⭫-92-ko[.yڲJeSN IY'Z=b;ܾk=Fl0od_q٦G Ɵ>>z9p'qƝ cĔ+`:JoG#)H]9AV`GUq&&2y${/빽_^P MÂ˾Nn{]RXqt=s_JgJ{8_!e`w> H'/zK"Rα֐uثߩ^>83N/)Qu `FJ4|uǓX tkO7B`O"Jii_iRNsPbѵjBp_3:fŘI )c!2fFdvv_tW5@ mlЌVJ2$"Ş:Bs`UwA9UqL됺J^N ڠA3%K zJn<~=6P}! 
F{oӁ"J+{+hFCZD=CEle-STrmSH.IcRIIYw!Ex>y$i3^w_:/ChSfF{Q cZ㒝ԗnNI9)䨱^-*vz#oh[t7 CZ}읟^=(*vYdwra2.3LP:#BG/.[`D%դ%a(Fޥޚ5T֎Z~H_;ܬ 8VK-͗hp|8-e^N8;@[.m4O4]N)LyhmȢ0 sJ?{`&Δ:h|bE}JZks oGKMV"|ّ<"Z_uZ[;}}=.)޹;#DSaτ/N7L9یTa%pB9Y pܗbMXrڐ -5੉ŻonS9 ~7@jO#ϐք-ےs4Dݱ'K-ԡዚOMK*{SN~zB\vUK?SeNf2>>Pg8G0rT:$˕gJF$V] $!ۘXFI)q13:%m)Cr(7AGjPg&8A%ډBKg ӹc4h-+uݱ\ p_/ 78o.ϸ"f #Be 4p?ݠ %,xb,BSP)xSaJ|^ix0>~pN@)knl담<~{>~.SZ(GSxnYL*p 3/t)xORG ?"sN)Kʼ :qj6nxV.Yy6 F}ɗ8g9E'FCk,.99cK QGkAcZ|R/~y$E $zarA, A E_ZpQ r|[)ͥȷ6G?O.eqV*4S̰ڇŹco& :!Ed~Q:vд6ͮPҖD%K"0ĨĠ,Wd[ %4txK+`ԛR EJ~퉸a׊*%$!-?,%Rz+2C _MPO0u ȼ,7`\7wpw㛌H,:2qiQFC=MX$܃`}QCffC)7#tHgӑ 'P͟9} 6Ɖ36I"tDy^[^wm:!rwbǺ\\Q}w@I0EGCDJJCUw|IjqVI;-@T|v 0>s(`D8|sqIK!uDA(T^bTkLky "=£iQcZ񂃶B$1uT+10&LsA?E+%4Ij;9ҫTud휣,[CLݲ,͊i42tXVWŽ姊1)Zˆ0ǙjF8e#w#8%2[l@fRvF4~g9%!'PKX[=%J(#Dŕm6XRςt㰩wp3n/bɒ4VC$TB4*a]kIϾYwls߬.AҾYwkx߬@˾)־YwJki߬XyiQ0ʓ@NJs6W1kKrf > 9qbjkp9cAqy=iEvG[Rkis%<x;)M*FT>=+$YF]mO3a"˴yBQīp@0 qTA) Ш!9KN5%ec~Y ́ 1 MP2hW8<q:zzu2âB2#1Y. f8EU9 Y 䀠97^[ 92]]GjxiP g"Ky"3hf4L܀K[ a8ʼnFj WK$Rzk z1QTpt8Qڃo _JW ([#38c%xk3xO8g-ɜuCwd-,0q\b>=0JI)E3ꑿ_D?Vpo{9KDwg [^|:o0+~|{6A _]'Sy3YΕ$;78p?d+L1JRrt}WJ@g w7}qM/fT5̖Tz\J)1$:2suGAq!G[xpZ<0Q54Ф_".H *vVPICjk8FA]]j iڠ荒L͜.F(ݜJn+ȉvYl-)8j)ѹ&t U3 (,UJXHMmL__sswq8މ~lT/JRVL BTXE1)HI퉆X3b}Rgc Vc[j(zTu-Z*BEP7.V` $O,[Ebn - Z9X^=([B<|u6~~VmӳN{ˌ3³rŲ;30P۫`vۏ?Å#߫n8>lRϮ򆳆_>S344Ze(φf6c7wЛy>]=/:sdk=Jjzob?]W_CFxFƇWJWb?(yڛ*+ǐK74QDi# i_weHShX{ܰg<kC L ]*coJG,ٕ}H*e#`|ylӋ2-2E%q&._H,i]b/5L]]&(!^0~|g c˴0acR<݀ yz*ٛ'Vd7(~Y\&q|KwaYteu>Om ޫ( JfJ5~8.XKW.#5mx?,/\:Idp"w۬𼱐3V+%qe I.hK[my66/Tڂy-z-=}LC* iDQ t1h P]ggf 3Θa2oyJvY1;bnb޲U72A=BSGQ|PYwsfvyuSaD3.OAXM6彺ͳOYU}G y-;GDs!j s2\wS-Rِx%HT?FmuTCC8 (X*]`0>tQt̰V\BējŠޖGcZШ[kcN7Vm%:x[k'5mxֱ| :<޳DSix?jDZn5 2q5_#ט\$!z>܊٥NU-eLwClHJ3r x|>7`y(7 J$.NiYKk%j/j"' վDTUMRC.1ӍvNov"in,bB!.LcYͥԖ11ObV;)%s(AW|y|Vj$uҌ- Iˬτ,9EؠJ#&. 
K!O|kZe:b؎c]53)r5{a`2 $Gcj,sa\Z"eA#ЂɰR>5 f$RjrHPWOEƇa`L|-r&4=5f>4_.- Gxj'^?>t6{\$qOx2-~"ޝݯ>[/3 K/dsw]$==,5Q>9go'?x㭱>賀1 ;ɓ= /_v2`Hao|Rքq?֬tc "pzXp iǏ@Al.nS_E>6Y #X_bܘE.ύU;Vlaf#L=G(ʙ%!Q5x#~GNspد"%EC/O#¯oߧr3A/[!{0]^-wD}@tHLc<;B1S`t;`\d(#HH52, D"\PĈuK-W5JI"8ω 未`Qn%!0LMr 6U(1\t3Z=\0 G*!wԚlXsDenЂZJOHdj!+=d)JJqbr-A{M(4%\6r>B΁X3xe0~QsUehT0Vjk5NO[*$l~=˜l>(wzBuO;_) ]Aw_}>xx9]~LjϾ,4\?'fY/ Ȍ{fLaiGnBva=Z`Դz;u"ƦS`] mH?Gd;Uu}zzb)WCyܾvQRsK=Ě"ݲYQE*bqӷV>T&RL7 >5sEU[4 sڴW#馶u]}.zdJ>3;u壭+r m/DJ1N{4Z yx 1FX~͋ͱ=3̄ח<0AX>ik}m6>غ6$h[y)|5Oz_6ю._lë{iG3o~hihQE!povLط?eבٖ33u8ƏYM4{ڨ}AG͢Qh9<~{skok}L՝ױ?PB%;T5[ $wU`/:d?|pƸj&1Gs ;qjkrbK7aY\! a z8[r?UNiiʳpuw9=^`עٟuP~~7R߱< mxx~goߧyl3ʉċdyå^Tݿ_N pGPRcB~QG|岸>zB]yIc!'nI6E6,9лbc:ݺ;o :-Yb-\wBNDSlJa43!:?u[Yg\5!}&-+YNpu1rYl0z^L"|gj}}SV~za/7|?|1λ~""wTQ*il_T[v*rEXsb\I C5ԯR*kVyN.jjQ{=({ }A 31&/O0 8ZnP{l MNɲ5ٟfТ:[ZޚkeOe42gЌ).3](ATρvφf%ٞ@l!`qU:Vl>XU.RȖf+V Mu,tݤebIAf~uۿ+i;>U)fV/J3=OZjGEidw?jG"4o~LL!.08-mIxݔE <ϘLTW cec!ј&B]cNPM 9w!ܹBkwz2|9ŭ.ӛW #Q|J m 4jF{ O+Aaqj*>-zaF1և \/g<Dk0+.׺=-樿,dcJZhXkW]&ރnj?>C4 wK«ii9~qUU(l=H ?ұ(}DZ`#>H䐽Ė>Fe6HBT-Bu#{Dq>pψaaHb6)c'0Zr؝V4 ;65%$u&]+ @޾ҢK|_Ov|(qܳu;XٲZ9-,(cH3DƬ]冞x}hK(&. 9i W3D͙CB;k2 SQXpaS8so`+xb[z('(d8c*70˼")r!:_LJ/UJ 5a\gH@~hUr÷*3r]t<}Swrcمg*B% LIYZfS<T6TFcela׬ dH|4Ӂ0ҏ$OnK^뤙i KS /!"+狍Ⱦ>4^f-e-_kr $ū9$Ā^Bpj}*@C'odqd&4H>:efl P|O2EhwZ]~~OO.󮠖"YX X )I(δJ1#ց(*4He+oџ},f%dDTumEw.tEkеEBÇ1eW1k8Zy:NN|^ph$Uv  䊓q~{LsSkEVq3])bCٮ(X%g6\¾ KET]Ҥä)dΟc܆f5a!F1$H:˞>VSlޱTQ";8t`>>8}tfdɡZq1W7Ǐ"74XKЃ5#݊u,ҝEJ(vL'2rX$`(QT9Br7T&''aM"{*d((廔kbOx"BD>5̐&ሬɼ a]2hT~oέ@YP~65k|yRk ;8U"U9rŬIh"Aqk- a!W5X ΧjflvOl\경-B%ٳYK1Z[QiÊk6@Թ-z!IKeFLR_̬M^ߓZىɻ cWdQK B΍?:9_ooT\B~?y)"N~6Y,4B'v;٘IxPv3pi/X+]="$] <0C"*=mh62t%FZ@p"i|`xHb<{뛈pqJ m#0b!Z:+19\*-ו-GʇظqG 3=dluz՘Z>B#ie@oĪ/`zxW#P~vhW~JBp20(o۝aM^i-}/3E(Qհ16oPhU)+Ja}%}2쎑dY^޼K(9!1SV/s/9[ ^5[e,ny靓\utNP)Q5r>4sfzCWJ\Zzr1v. 
]y6I I?j6.ǗN&ЖnkhHfsc w?dn8hH@=4 "`ZU'0HIڻݳ1RNVfVO0JS 0tIz8oyM3=z>-g6g`<%L=GjyZLǸ&7K}dgV-Q3rIԨ@ %/^V^_a\McoZOci/,c&R#GSD9ol5gC4gjsa[P+w//e;-@Y*a}u-2N0=hyVjQ2V(0t&3ϛ/؇f+yϮ˼k|R`12'-!kt 9sQ\;ޤ'ߡmo=l;;ajDؔ5+9TINg$ Gu~6 #7;0YEbpה@\AkN!pNK܎Vh ]I֊ڙs0^1rf#`Bt!*Y-aiJRRCU.COZ jWy:aOkTqZB)%)ZH65c.یX@NB^Bws^)AϿ ~>JR@hF =vJ^F-Īuۤ,QaJ`Jr~sFP}t"sFx@eUp喾l4dӡ;&K?:&e^"Ū'M towYGKBMVOy|Hzpٍ7!c!N!0-udC~=6ϹU .DUpR>QC`6!eg:=kΟsΣ'UjKLvOMaX4vOIygӨ՘aׄx՗XJ0z{ۉq](9)0YmNr;ŸpV1˅^fM`=[*)Wb'C0ǣs*U6/X9gs-X[~e;#5CPnI7n9>kZX҉p*мJ[a?Ț\!9ڢJd&Lju츎Uaǩו*Y; b`BU'7x[Ƥ6.eVœHenj&:YbӘo]O3U; aDz7@nO0;3dlZj] ~Hn3WrY{nRcNY_@b fPj*UaI6ri+vzg@&3֭s @ql56+IUBFsQbkaKP뭐9.ݹU\I0ӻI7bxu:/ܛ08 zɠ1ԍlAX{g@-c}vJ0 h;IȽ^6{s\aaa/ƨ1Vo6``Cj.N$4ps+spNdBK5G2u[ѱ4@Ƥ~2(4Ӥ5˩9XU Ih ύ[N7qoǯלF:\pgop 4=o{!<ä·Op|$;|_={iٍXZ-V䵄-?kHç ZWVjZhE < S tK5|c⟡6i"DWg?~~/ή.2)_/ή6śW/7W/^mv/6ϳ_/ތꭽ˗/Ϯ3^~+_g/5esߌN&?;^u6 z}&1bJf^|C?{B/a<=aAyb~ʔ9' ]7dZˎ{!ӱ3Չz=I].MC1SSc8m:b9u/W=5H&*04^],su{u f0H1x\d:iFݳ SW^o_!sgxV|z:+.bxgw2lqz~BNrn祁 ߇r&OJ~lp}?};N&|98~uW n)|z8K8K 1U̚mjګAM$n?!7';ųrIE23U̲8Zy:NN|LaW+= Î`J2EiIkPql6Zf2ĄU x8HcLQd wԌo÷L_;hvO~&$?I|᧚ \y_f/X?Qdd1lUռ2߿_iª旳,ZMB ):Z79<$BGf &hݔcqH^Kk _=.Y9HR;?J!C[PPVC)Oh-'cu<-sp-."!KnO=BGKErp]'`iEp= G`B);Z8h7\AַAx1$W2-+`zЪc'k^౹)`W|!f !+մ_22R`}i0"uѝ!. JC@^8Ƚ{+qj:CwZ:ˋbE*l; ݛ0*{8BQFMHh{β#iRђI#)SuV4}4_$cOJ휵<-"L^׎F1E($&XNOzͤ&dD҈2Ċy 3"*C'pNuv k yyi-+SRDSLmizKmi-ꎏ S@%(lGOVm Q+="Ē!ǒfy(.z/(6SI~u:u #gJL\eTu1m8\Ӯ޼%0r`{%W f%oVBCqw_Dk[Kuߺ^`MF /t8U;-R8փڿm7(0Nd15xweqI|Y+. 
Γ!)&),{}# Y]GW)"EVgE~Gfdi*1%3kJ,R2fFz]1F]fcbbn?aj"VJXT;íAXqCXd~`/y3D)'c"[1g)"gdGgA?d }=BL_>E<#хgӹ&B̌g+ʯ'G"V#Ӄݺ]6u)@&؂F8$cz1z<&t_<6]1`LM;8ultoVnxtr#@K|m9FYP2].d T5GAh3a3^9|zrLFtѼ8&=FFńK)Nz1*8gG4 ֚$rb$  NjyKQK,̶1o˛r.6F"t;~\6[#?G {=?7d; $Wvwbx-IRAZYپs[XJ{v:'6*V;F1_9R=T47)b0Xw34u'6D3|X2Ba4lJ`C4 Dj!:2I OT+NEe/o/x{3Е닶_\_0B +3uZ60]eo޿t>=~ҁ>Idw ?-uDb=tV@coYtU\ 9[8b 1Ȅ_ܣՊOP>+1qkp>y0OcQyOsD 6JQ3N)¤%xa4rs<Cr,^iE9Ccx ǻ#ۃyXctaw.C,¨[9#É/]O;t 1rHڄ -4nֹZH.dثExAz Z׀5f^s ]BAUkYk]q§B3?,<ךA{f9 Pe.a(U)HTVADJ&Z0n4 ;f×QIor]IӕQ*#(D:,ANIOs v25c6D!Z3dexIEf%!,r$j5 }(]Vdh ³)DDo$Bі,KVkPymЅҠ*t[AoM(ONGE O 9x׍tHNZ{b 9.'|Ӄ'Lɖ?/v=pg}<\vVN ͆VccQ}zW`;G&rk/`8\ujZ >0\-2W'کܾ_L-'pZ-a\P)4G`W+W\e{%rN"(@KLb')E+#™ZBUBP&l}>)ԡo"SBx>Jɭr$tt rB2ezʡCϤ$G %,'1e.i %NjtmB` Dy4"ŤЦH\M9-0RB. ?o&\KS rS@*QFR(NK0GԒpZD(rf)i (8v͓hlP/7_هx~YWY-@/VOe2Fxz\d|;ޱw߮8?]\oAoߞ3naj !ǿ}s~.V~Y*9Vy+{7?i<VBO] q! Z8I<_ng}W< \.jv&eh peZy0IVfE:GG PLH"ZJd)}*"ZPݣO"V"'dCP{+"$4b، [b wSI:S!{! 9thGs55mz:X`44r ½PK@j{&K͂$ f[7D{ LSh!5Ujvl#ؐWcNUՓ..J`Z9gBUJBx"a*! ]@3%+ZOJ<`MF>pE\h-}* zP:Xs[|Q:U& Jٴ PRgʞlTqs :{ h;}F 6U$pIx%+U$$ShˤmYtiiðBhpHG(~~+cדE'I/.I{!s"FI*/42vB6r㥌MQ`CKC:1#gL"bk͇ %F-c3sMli`"\΀bZv5z0;@)0P,H(lv6jй;Hr׮Ne!e/N.({qwM_(@U,oǣ(wXB$oReW5ɛ"E򶆷iPFa\Ay;Qb ykwM\U2J5,0PWk8#$MK$($ 4\Mdd>n.8((MHEU=Y6I=v %N_Vupo=?[|}{~ryL/z¢^ʥ_>y5O.XZĂ G/R/j $&` ~RUOꄝcrzvaT{48?Ǜ^=|-A6kdlE^VH/:*ol)-,TBhb$nuą# 2zc<'79ٙ O)|G@; pڹZ$ 4$0%G,SQ{@: 'V6&聛>r |W7Ԃ$1k jOt` )FW_abx F+ə^v dӠy u/~0-Fr.}W^r+qݱ׍AJ>^k RNcn([t4\fYjչ\l ]yZ_`?DJE19'x=RYw*:b+CԦ49Ƃ n*8GlUnʖ>ky"[1k9vS.ӑmJJ`ְݬX{'VvmWn~:vf͌[ō|ѐ|G%Ub4Ĝb-] DixVkG'qR;| ~Rqӓ3@t<Ӛ ^h!laE(eΉ; yprP094z^޽S޽cp 4|~t J'kq):hਖJ1:s +ϜG+o\:eUMQTwzr>4*?Lrݍdn'rNl46n#0 4iAP&80D PV B&H ^kkNo}IG%?y_[ZJ,5A$Z,uRI >:1ypd4oiP(F4Cc&, fATb??![8{qUni,OCYLl cӘFEWqa~OESjhr1\JȁmNFm"u}wG< ,ꄫFhw'x"=ȥodb_륊"mfjo񇈞凛\w870C0#XYэFԶ`Db,zTꕠT"qYHlOMC/w5LA.ʣn~ǃZ߿ y0$unp xfbs Ž9U9tQ+hd6-\*f?o=Rs.}?,1ݧk r@LWm%zMcϠ-WG{;`õX޾QԢ.Ӣݧ+˓.wV~̊(\&HOC\LȡO {"dTE )D)E|"nl8eF>#L-H{eyqGrG\%Pȝ$?57PvLw7O"MTFr,=j;Be`/3ggmu^3 |8Wz KWZN ˩,n.|u}UYgJ:loMЫ3mK^}ܙqMPn> گ Z׽ 
fkЇ,^cn,F5<%RyAZC2imǸuEH ^rE'dڪ({|q'bs*_qC;GhT[՝i focN?5&)#@d=%>r</s\gZ{ȑ_ew0cede&cM#&-dV?}Hb]}Y,:<ƇU'I`o{oHi.V+MUeYաM; laB[JqԨDzHPiTP蒍MTTwc؜csJєAFTᫎ¼@:c3*lN5e-#Kw)5+Ai6*2'i" JR+2NgNr4\[RGC$@PYHe A ѡ _$hn+B:EJ6*T!lq΍f BqϬ&(XD:6Rw ұTEV^œ*-ēHe$U<~}j[mT4.jk(%kK@" BU^" 1JYn9|6ߋOkZIߗ4tjgBMAi<Ůq!JQL]t=0v1<$Vj !>t;1rthwvG۴}gɜ'\Hz"Rls^wN|S P#}$\ܽVY&4G@N0P̦tzgrIaBt'A$DV.D =KHWrxWxuDݩHݖWwvє[%F!HJ*e BU`HsEjDb;ڒB`½4$aMt(ҊmTylQŸlƫ2<77/94(2L->M0cT:h[27 { $Ry3r=:3*F5LC\yM䶽%:bbnw}Ƙ`YZL?L?WXL{3z 9SK뤵g:DZ%!ޒ`#Eͪ.oYi"W4*8TNF=Lx6?~Hf.^WSK3fRdgr޻-kuN]']$\[-,?[n8|U7zm1ףme̊R_wzATܫbí܂(Zi* Am|>iҴUp$;@d[((qڬZ[o߀NG7b zC jzEEu }buCu8.mqؾoR"B">HtlJ#1u"'zʸ1Npx2'1"}UH.;jωV2v&e%=&̿ϟ)B~Z/0(@L9wf8"yɴ X#${BVJ{!МH-CiÆOA`" ju^ m R`vD]1_>HԺȑbMAa,p9pk5 >HQB zoW*euPKcc(x:!9ҹ;fxpy!eБ%sc&!V>syα#d4FJh{GJ&; qZ3wԁOk`zuh4ra)oQ7zk*2_]=znV.xۇ[\,B1[Q]7_'ŝ^CWm|o&lKi(LfŊ_on&c-ΊBW37fi2̯?siLFVGؒL2*>}6 w60m)%q!.L35iHJ%R;*g/ Ԝ&Rk]G5蚪pݿbti^icTQ[8/J2˵u>7>,rݚݞLpњvߣϩUV4hݽ )iM# Oߪ 13׼_K r2B k(>r'`έN};WPbD˃30j١X;F,.R,I&B`ۏ%rU#*\0vjZ7{Ns5 ƲCUʽ)%Ѳ9y)mS].}T9;1d*uZyf.2Dfn"%[ ; Ka~9䀇(k$C*nJ>ܝdԤ cY^Pn q?T=14yHоFtItlDtdԒ)ƱT#Bu??o )*TXeS8Zq[ş LP}=/g#1q\pAe!j\.rXh.qOue Q}['.{>SA$rRC>KhN=F޳$Rf%C!J)eJ-ZP%"dOCe+z%(Ioڻ#f@IHԷ- G7݉$֛݁4aJ!5(`NI!&Kt)ōoM8ͨ4a_?&7)َs=)ObB#1"y՗cx +4).\3R(RJa1ߣ L  ޼^yBhT4۳h9@mf| s\m1Wq[Uy[Wf~D9L :\ ,09.1Krΰ`0 $v> ѷlM)LQ|v>ӠeQOrN W~hz͛q>9G9f`Lp)Xz&!3c!,J!@RNQ`'90vJ[eSZjEy ;/EDhcl'XjaGPXVmBOKߘ0eHic5kRBD00Fj4X sv5Sh6`ǰf 7f|`oJ"5qOp=_d @ 9 kY |r8%"%bD0r -X:*0QX6_Yl\;k)1a&6xfT3Acrf`ˑ{ǸtA9 q1'963̽(( >Ԃ kŚG6e7W!VY 0Xx6-f# +WI@I%~("/)a 'sh펛BR&)$Q(WDX5#HDEP x+!,Ɩ{&tMϮT臜k̑QK` ipT1uKb1ՂN}T<0^g &8x/ ֺӚ|\#Pf.o@` *CL rV(!9xQ'^e,ZU"u%]G;4-1C WE**MݗDy"߀@Lc.gu(_۪puǻ>BD]HI.<~\`^54&=<Ř@-T|#?z |b~kyU̍/~L'ܮ~:룁eJ{7_K8Q} 0xx7F&=#[#MtF%Q)jFYTu]]]C[D?1<~6Yl2\_S; qsP%hZ6W+(=LI H55_:_x hXQa7 IZ6@A/. 
!Yy ŏ$@VD2Rp*<T夎͙d Rd:k;%.a|@4BhU\@7] 8z[JH :.,np8-Nʮ7d1D#91Eiq[3(F4j*dqC5S)܀q,c i2JGb\@[`xiY#9/y4LDU*4ID'=% 5Ѓc\XR`ȍ嘖.{^b)E1bg; Ɖ$(.e J!_VZ 1&y_I>Af"4GJQqw+ ҌgW~Cł$gTPNžuS VDF()D9˕N6MydiJfY1o` SJHFiٍJζ!9H`xH IN^ńȁ..-:Ks =idQ\F#|'4Х%llFR4OAd!nr)!E8L> !OaSiiSDHt`Pƒ1YZ*Cm)PP nY&v8(~Z+z{gZ (rgJ;'L Q7SDkU ܣ*p$v?X(>׳$xr^;tg,0ן\g8D#ϷW °}?^yt)::ù5yajHPDcy $ ZIoWqi'_MbC{Іx6<[Igol(魇˧3 A}PY8?$x^Id={!Sy )~8D-/zDDyza%(sHdv.ljeF+|q_zo%U PuZF7݊H;[9^%Tr@")`MTQR&WR&Rı/7^Dڤ"q0̏W'$|cyA *A*FnP}!! Jqn`%[N_V䥈6Ѡ'7*΀]`īKm3 C,HyЄ-:V̈*DtNmrd.'N#ɓQ'>4ˉH…-"jel\ʵuAu J ݸ!=K(%Y$II R8ғO76E$AKS `14@g(cfEl$Yтt v,~[@n~@d &̦Jn$ITǑ%ER!baι>E_,5 vH<^]-Ь鞃W ꇱN .(3Zw#'h;El{|{ü 6~B{oGfʙڈ4d;OۃEn_Cvj4W6gB?O7w1T|SkP=1*j#9${f _Sg0/\xw:!wJ+҂ȫDQJ2.*ĺ|dxPdԽ~qKUiBcE- K"fb2)UOfIoE>^UVPRPLI >߂H 36ۋbF|8'^g{RȐxtOiG3Y=qFm"'TPq5J{bi>kZ E@kQN(hsֈam ڇQw 27 -V?Zж^Hw6֚&vEߤK5NtR}c9~:R,EM4~'OduGdݴ`]rtlXgqJ*8DRk_uhSDsɲ)V\6"׀֤!/p"ۧk^c|ȿIQ'⇖gN.;iL/ۂp+?M ڦu7vI[ l5> e]u0*mP& P|}* ifjolvҳ` #:g&gw!dRrQNU0YQ}I jF|uCh_Cɔnaؕb*ڢO^~ϩR;ܗL%E2T\ѵkKen%ך7eW3w |cZ "\s:sxj OdrjFY|O֡wE߫>;~v]xJSp:#7'WN'S;7v?<\_SeTuI nm%שC;Jtr"oLn3?ꦓl3I;[n*^E't N`]\s 爓YƵ'2RIPFelm”Si_ph+Ϯ-[t.R sz`kgn]%.!"\Q4 p{Pqgygy Id& xE* $Sx~fA&R[Q9ۖAKƚxz*.ަ* #FٻYۧbzHY*f QM.! HjFӥ`vv;~ЈlDa|%`s9#cK4e:r$G`LK-+D*8i `=,U/_Ĥ;jDm>Յ>@ϣ ^RI:&.&*&&HYLSu4qb4q^ڰ!_svfv}1$_?2wL_]mk(Cl&>Hk!I>%w,ɹ3=V"z~Peĩ*(NXt5 mu*O#d%^]2"qJx!2"/073\3hVAϞOȳ픆r'A 1gx:_MB0WT2{IظD+6!tHqk6yY\mul4K-yHᯰqC:z^;o;$'4CPA1T|Vfqy; g/|%;Fy>E-vm?MfLfV~ђ-֤TߑTZ#QQ*(M_tIRe./mRm `!!W`ԃ}NkDEVӀZ(^3+&4.l>;7$?$?a{q:探A\3K6?(|KY)*y6e mdw6 8zRO-AcE޺c :;LhYtnvt}up/цq^x;Q˿K gM 3myִzۣA]9Xxٛ$!m _52E2 "5-G=hko*o^Wx|I3ٻ1s*)`|^g8T m\˶FfHʠ'Ti&^{x!ԫ͍hkE%[+JQRmI2]uك&k.5`ZWXs)c515dt٥ ಻\Q*a M_#DΚ\ ]Jŵ(!.p A/?{Ƒ`3:}Г 'r=}+Q I9.j^D3 "#LUWWuWW%˖iAe A\O +BQ ocL {M :|s(k)ot_2YNsp<; Vӛa uT ZV&@Y6.PJ%:)-|5wL 윩˜%)= ҇(2@RZs$j4:YuudFk<4M+; OncyZ 8|@PWE*I_%cC`~xD0T[1R"R<4RPeaP{p5^R") .[ZC ַ< 7TSlQ%hoN _ %|x4I+-yt3:=f$7)4lgUC%3l=%e [?j~xl||gbn.?#2|ȀiEm8gydY~KcoL9!!ɐ @(rABjU wwzhցn1\T9r@C<t K2fb2),y) Jt}A! 
Bd`rrr\ .5ꐦ GW2Un1xk-yj9dB*uBJLpJ_=lv*Kmv*Kmb۩2mT2M|")aO_e*.;Qۡ}`y+"w)W4 Tg#F7>\Z9w A# _ *t)$Q t_Ti`# ` r_roM(,S uc:Rnh$TR`CQ91+"7DhKE&"ë@DY ZmQFFe=f{̒%1+zO7j: i ZlNkJȳ)~1Oz5ںR:Zclfϱy'Zq U5F# \5N7 ]U.oHkp55/<ݷ/6 b-MfѴ󤚚*u-b8O$c}8 ,8EF?OF[^}ۢ0t?|\}h?~޿[}bhwg#oLgK?]]Ɨf?g)~Ng_߹YBx'Ɇ300`>#L bc雮P\sORqVav U+A&*>/WRBъ.R) _Cv]`q&^ۻjRqmپ1c{USZ: ol^QRNqNpdJh%bd+pt9dA(9É YO,dyP,-c C ld}iƴI50$Xr\F)6<P%:%]C~Ő&-S^׏ Nq[IuLt((htDE(Mi#۪(u*R!5Ѯ Մl{H*1K}ssMw:yK)=i2 ܗbwǻz;Y-WY;`>ה=w(䬬d=YpSڲGY6n~L;T/w9la;umi75=Le}IeG~_nξC ܖapx$= <;@?<?>]]Ƌ70lqFCc[L:>,M:ތ7~>L _G.|;~嚶`6s^'KzY6o&ЍTs2׋4(ӼWrEm$py甭3)a#Yk.휷52tMMu(>u$gRtRsf5ͤy6 .6ݯմd1Jla~?*"K"ϙ&ҘîE,6/G ,((6Jਐ6RB $UԆ* wКC#sԧFY-H͉Va Fi\p\xuPA{[gZ+ )xԔK僧ZG ИyR[PB.> {DݩJ*~Oj'~_lY Rha"*D FSF+"g]C81u6zB}vTg&KT|(cVPqa1aȰ u&uF1k*n/p) ::͸yU=VRK6Knm_וXx!Nkkp.P24\ghjV,J =nyقlP07ף:R1IkXB末ʪBq=RЇ]s =roE* h[ʋm k-Yer7ξf2KV*7-K]<7 W7@[o$_)d^"ݢ}/׵/{P 3䒰B(kr0{l%0 s0QYPIeҙezl,3'J]\6l!FSeUe xGI/UܔeQhfY%%m3k𳑦+:'/Qu ʼn`=c>ْDkt HF:!|N#V\ кW.Ӑaݦg"t@)D 5VZIhr?oTؼxT2- ݰhʭ%\| g*gOo5mf6#7 2]R8pjL5X%"P~stt9N `b=J)O[nt loߗ^}6$}_CE!p_T,gz4KvHCuP`z58DebH{H{C {'.` O4~i-<MI+Ⱦ_0fCPN&BI}NFQτŦ st&&n1y&K̞Ewlr7G{Tk -W=\S]s9RNבR+r>I.ɮ^mՐ(hW•z6CAGvh^{v;br J(J2pג+ٲzQE7Tq)ZBm 6|۵*5:,+Uyf8!"2BϧBN;-G4׈Z`)🥲@oR50Pj=¾.Ȇg)>h:2 }tI6 U uNmv<<nw=CkuHm<,t"0Q2dE1yJ&{z /QżXKHh5duçBw"=I?O1,jb/Ҋ7L'>IGT sV׼zT be|`*(ۏ:nO?U1Q#Z.pȦC*\6 З0EC*0'sm:|N޷U@ !yo:ʳ0˖[buؔٻE5d@7ksj\IKJO-Efu!p:}m<657Ziem"ڮʈT5< Kx} ;1$m0q 6!,uxsRԼ?[dC:g@Q.(UyF:wW`( nz,)Wʳa{3 ~Yw9n<_-A;#Tyɜ `dR x\H Ɣ؆͏:$lK`5wp K%CK8%UkN7EJ)H3񭫛i%^ܲ0zUKr=fOū/UJ3zO7vA}XR&@itŭ58gDߎӿhogG_a8 v ԪgI_SP*і81 zS絹 ޭ+M%|{%ӓҙ7rXue<bG_afV_+L_]L@FW߫ JsWhk"7% H/>}xo0R$6*n4Me2ywd?#2|mVfCgyd_{E@K#o5\:fAG7I#rΑ\Kn%c.F򜁌3t@4]n'qM v5Βg"ϕPBߦO Qi=Xncyn3D0dZ Y62Xƙ^BВ(࣍*2b >F}(f)7 U˔Ipca:;v߇MLmRV?_˫ZTQLrW__H9;E]mo#9r+| C$/9Apܗs;`{lܭ~4ص"bɟˇw ]1Q(џ_ݻY/ ~x}jR@@k&˷g?sWJ[a٭[u~7_O]!A\K;+$I ?>XŚ+8F7\I(+fkpƔ4̞,n`~Aks@olvkdy%%ESB'ΚȠVI^1&I,˼!DJQ3Q I3#ZZz:Mr+,(#:3';BeD4 -GdJ!(4L2{,Q_ʞs {J|$u EK'^#l(HCvugt%Ih<$7Li UI+zWHbET)j71S!z2cGҤ>fhO.re V#d>1$ M,RR1"+@ {("Y71Wޜ~)sM=6[͂d:`5Z`79d,Aӂm% 
vRtv)dI5ٸ/3 j=]mD=i[<<#,S?'#K1SkZܳhP4 d RG)B߁[-7Ơr^F {LhEjS0E?#u8d䩼j99a={ieq_¾zc?txf u|ciK'T cTX~W܌@+"DRDº$}Rv:9d8Nj Kޕ$cZOk߳_Ͼ_,hf4#+>;ٟ 3HV;6g#-OMM|u3ޓxQ"?T:KRE6J)B/TR`N_k];h3` vJ|ֶQ+q2R[J?I„iyEZ7 ~+4UYZ_Sv _<܍xmܳp}i D ;U9:ZtG*3ݥoͰ%xoVl u-5Y أ>݁ZֈWotG[]#"[23fKrUq6ȷH9oSx.vߪuk@yto/Ff W,`K\j䜫o<m.6Wp:Z{Duۓ`+-fǍMx rsLsNXc!pA[ IzZnh,r6G)>* p#Fo1$g.$h-|rQ)c eNC ALZm l>h㴾RܣR#"F\&#SNBO1;kY^m0{ߨf;fU mt \^1]r/>t 㖙񓯀cnySV%JNu-pcF^He)ϞWKag-p|iPAJ:N TbyX0P,/5EH{i c9iQp#YR.2{" ̉9rC]+HΝ33xN I%|= Z̉A͆+.Pr# zY^$ v%<3URf8~s]E֍JK1v?^)u^y}`]IJQ["GP!Ea<Q(|$YyZZv{‡f||-jg.{>uf4v]PT]/Ῥ^ TJ&L(Y+F;0,kA^zudq 07+VySBOS}Neۃh+4T1\[hn+d/ܬÛgqxYz!2]׷s s3V-Z!  Mv@ܭ>ԃ{=ϯZ)]1&[ d3*sToh?S / 4còdH,<[1ik~u]Px /xTL^sZ`2:k_M=Q k$zh88`@}3Ϟ1O BOxn|j 2p9E?Z7ܧn!'6#49DQ,TK WQPe3U# '[ 㧏O?Z]z ym㧥M&{71a'5uσKǗ$bΌOAv<69v柞y1lB>.p;Ͼܺs||Lhke;m|cm|mɍ;*:0ZT`1IR& a-r#C@v٨/ؠb2Ak<8 >z%ҠR&cE~ͧS[Sl'֪^3k ʜvIDcd(˱Hdb$0RB{`f{v7֋pwt:dкu:Ph>py-8 vsdQ|[JE;5/Ԯ)Ċ.Cy56tEpL UxRK^ WeU9R ): !Ǵ|]}xW({Xw՗ҪĮV___svEʔ:卿wxV7׷$Fr>>wgqnge7HKa?$EKi%'fnm_j8oWjC2ݪ2)9֮k%.-KC10dֆR iKpar9^wh?D džAP#BacH(yd'YIE|ʇw}(#AA,1+gE8 )< `dv&+oUERڈ~}Q@t|y#[9-F-bB 5, H&i4 0mTO!Bzo˺e7!1`Pb; O9F6 DR:'\Oe4Q MT@r"E- ?U":)HxȊbg Aɞh )wa iZ7/_׿dSY?P^`lc^2Fb&H6=mnslxU*T])y| ޛ*v~ ׯmVnw:G@oݺ6tj6IPc\BM*imCVko!]0`SÓtjϞX96{zu,ۙyFuQh!48*u"s@SJK{4fqml |eТ GVǮjެ^޶1tצ1Ve8ę28޶|kaj 骚j]yLPU;pT*Yq56au%v!(a˩M-$[kkn8ŗsxŹ_TGOr*jlP?= X܈Y lSb랞r NfX)]̑BaSӞdr t-aoR&h嚼Uv2p =;|VWz{:{{FH:=d"k!'ǥj3J[[Lag+5N=l+al?T7߭IBj2݈XN&8T)m?b|qڹq5h]{f;qw7z`6ӴW7~lA:r@Oogs:;[~3 N70)? 
't1l$'ɳ鱬M_&x8p+Ny,~/ܛY̔0rVnB^6)LUFD[S rL3xγ+*znͷ4gz6,䅛h#c5yoa$FSɽDHJҡ/VDq&Vl+$ַov%jpҹO\ϖFO ri>Yb۪pl5bb§ph4?xId2 wE_WVA^D :5⯫BTzT,w t 1m^Ã_iDpq'*heuUD5B8>&4RF} h"=ܣU]P6b}EZM< ,eHls:P4NbM rhƨ@ьQ{.: t47/P5x6ELi+l{W}坑MSt.\4M1Ȫ0]z`omJzuޒ)$Z^o{+X {$㸷`wg|0 |lKsG:Ɋs2Ǚ ޥbTh|!2\'/x̸q✻ljqf?Z'od}^ hc]i{n;O_0D\( .6aNx}zznVכrYW SmGs N5zǽ^e:N!j:5x#9_w: S-D1fN{,g 3]9zx k2/>m6EHDd'rKɇ =N9$N⤉h[1izwZ'uJht8&;F:DrE*e~le'ysLb z˟@_8HjLW3/I,y%Np'qò-w.˚v5'$HǤr;%HoHj,j+%"BH"4-xP8굶e08N*悻i0K_d_ۭ|9 &X\PCI̹/^;XKL]TB31aIg ƅ*pYD:[QiH<%yi(YL`âBY_AThs1m0,@tk5W$O>N1ܸY{k/JbD /W0IAIrJi(VoXD|JQ1XOK&'_]:y+tle]w:YUT?㴘-`DpXBaylE:Z8}; aZzP'"s)_+Fyp)gܝ%o͹ x3is\T%dgc$:B_=M0J%Ao=gPKYfp._tu/A^Nq;Nr\Pªy5m64o?=|+'%JNV+Mf:G ;g.-P4;ㅂ^9׊c^9{%F7Dzx][tڷ8.8λ|1P5u?_'c oT٢Q-;*r>Ucbk;OΓ#δCg-:1IJgW*|YhoFdc i`Oi ҩ8N>Q+y/雑W UƬq3m-eA<1md $r 1-4"&m^[ slaF c@L417!tzJ'P($,,6MHpfR=Q:P>kny 吔oHfQb,)-F BbGfB n-(%sMtH0۵$2Ƃ}{͸krODӘy5btm&!֞Ȉj1#|`JIzdJZnUe+m+#LLݍn>],|lNj׫,vV LdMX:܏¤(2L5KTHXѓeԂT;籀WfVkt0)uVT9kIђpmbSB&ywӬr|ޭ)9u#E{n_4׻a!/D)@-ꈩ&KJLSsfe6≯öƇ2+$i-jwD b$z ΂HtJp/1:FZ-ie琌y!"m %)xec +-֊4 )u 9ڂ#.p"HK!e{󳙺`1>d9n^,v}i`#lߺT8b8+7؍Ȱ`Yt B\%LDG~R7;_wR+RhSϼ9EiyAꂖ{=b5xQz7xQNj- /7c3Կ1հ,Xb`4)? «-.QidoΈNj:#\`esYVwa(Q + #00[! SΚFNp0ϬMk[mdmc:N +M"SRZՄ1Wڧ o!Љ{(x}kMSL Jo'V^F.ir"D4KØ:5 j00q:T)X* xldSˆFzE\ (c E1>  *LC*N 9a!;\ (ra TnMkw2X)5WFR !rZCyCUe>fep)s~Mj(4kfgkv\{~?̖k.,U/?~!w_T.x_cxB{Qӯy? d:7e -.?^߹1wX&;9N듀d YŷWWI) > 72F ^>?}fmIZ~SDv7flܕ ^54\-B=?E-(t@3%UqL=dT/q>ZCNr@=_xZd6 jn!ymÔ싪. , (`ġZ|7|/-Aid`bT:E3ÍNsV$S6ɠ 2r(j$ⷁȅ4$`R &b4c8!Zq`f*k I'H@ RE!9+Ek"xDB"$Sd<{-=O@ÿFGJs'(3ޢp'ˇXsPU_jcgZ1Þh@u)sX&j(79UmP6hcMP'5O@lH^ n ޤcy!ůz_}(gF6 I/./}(|Q9xEM47?dZQb%TL=cHYP]9 *bh<đ U!s,y{CT~,hd'F٣B:p:P?]@V%k>` E!D䒌tˏޖu\S)hT)1؃wU ɝ$3׊c]1Q) )H2hTBG[kB IB{ PFe4ZkGgseʳSRdSU-x`JZ'pSV  k=r+a+$2cѣ}i OR5&o?ox\ ʟ]<.WΗ=./nH7F]O@8‹UWKg͓g}wt>~v>ok.f%;?ІĂ6Vcaj6Yp D[I0-Jh&ږo^| [xs/TbJUO5VB%w;fʬM iݦVҒ>p2F0ՔP&)R8=f0m@YҒӁ5Pcër`$JRV=r-G/sbo"Y:)+4TcGKNXN-{X(Ri.鎌yB! 
G4/΀t E_(u,?+D$/* qa(.'9H`Sn,e8S\"lȖGʖϼ`ve՞ogǗv _ :_!ȩC~7@ GMJnV:q ^':kK: 5{0pW˧;v2ZjQA=Gyן\}ߧ哭ꌃ`FeWMJo֩<,ˌhP*|=p`/7 jRm}.\x>rZԎKm!=1H{AydBm6`kD4O/L(LΊBBQ>Up 2S-l6uTNo8yֆ3F^=PbB+~u豕[yↇr 2euyGKcHi-A+~=H6NdH)92FD1Vd7M:<drW[grP]oċ͍)BeD%2P8*35uBSpFb](QME@)%t&K3$Ew΋'{RxLw>(]_˧_]a.,!(V?;̲q i=Òj%ƲӾ>5.hE䶧o}2ɺbfs~2~[ 7@ Zk.6q~;Ra.K^"‘?2}l#ᒰvzkj~ٙdNjƼ[aR'/ث~1:(W '5fÄbSHE\+NàӝSewGeU@ E:NPwîkuhR!ƚ>FjweCL[Dv{%coȫwy[ҀDWZ n$Mޗ\7Q; j*zrml7 [MLJ}tA >rch"/2sv|Ov]/)?}?/b9?!/3yj3x@͔OŲfU#9D0Ef3wSL:nE_Φ%=[9sFaL?nL»kb|FP6*B00n|crm`#`;o|7SWyh!+'QG+.,']=U?;!JgԈ7Q  V\f( (t(ka<*5VRٻOSW@%y%)U#*ϱh#!3g \s^!DEsev6ḤMI[ OԲa$&)CnMTI$퇰$FO$E1Fzqxt3)Mwc.3~@vIpbC?5UJ3ieQKbIIQ6ɹ$w@fj|n&YQJS2c5k#闯iV/xiʮSFODOk4WҜՈ'G>God=)OkiԌ&d\*Yh/AS##,Bc"ڕU]dr$P-ϟ:֎M[xҖu3h Z 8Gz7Sn:N3hq[t馯w!g(Li3sǻi)xX@'!mU4l!OB1JTה~{(g}r =@vZ/- GJ< W+yj_ W *|) \/F}rp#`rW%Mr;[f+^ 7nE\}9O|P2GhQx75z{FExۂ_q/[*[2\@%z]=mr>p)O%9pϩpE Ih)59Ď~Ua_?*gl{|p 7u?+-NuI5 { m.>1/F48NlH O[B/(PkO(T0otJ+Q{ʔ:MRg( w5ic(ILI⪐ꜱ2Y&Jh.׋.bc؁J%)ƶ: $Y)LJoSDžn .:zAݮ[C5Q7mqLĮYLt,il#` t8Dj=ksxdcZ)g'_C^)l`!?X|d0OɀeVε}1&,3^NTcs$^\RIB2ў'np Qs@M{8g{2#3ݟef:qjW(X-Y^J^keO/O ""]yA-phEJYQU#sBd:-RbUf?B4B1O @B9L(eCU'.&Πzt6oC7'Ý,t2W &bۏ2Ǣ-YZ$)dYYSD"'YSï7gr{n%ٿevR[$?͋2y]~X?xikH\/v~cS `M/BmYEq[\oۂ4cuNq ׎~dz闛Ӈ&Ue_>TۗcvtHs>x?jdʲ$CG^r{LYEfbY3*BQ&<80>#Ļ{)n%ޭ 9D0U1qz7M:n#"MnGZ<[9s60Lsq!]c|] E !DyJjR_NN^1:R-R_7vQʕJrUZIMɈ/=m Cʗj9 )5ZR~(EPh(=&uSjx>mg\J6$NVRKFb^=u;%&R_7&껠PʵJPʧcR_7F5iT?*.'R!PZIDi=g|\@)zԨet(՞}㠑z T{I\I͙ĥRR;w]-׼y|myS+RU|u7X@7?%vsVK9ˤyX\i>SiɎteu?OϷó*K,nZ(oT"+/i{ꄯ6,3ϙURk.[AiUcJ&Rvk~SD~(ҡ'~\H|9mC)j髋Jn ioۇ {ր0]lENxwG_!%8#!d |W1-$ja;j4b{fHI\/ԜWWWwW kg9:H8ZhY)FB)T%gldU%P܌,nqCL7پ r.3,mzP;!l+O2|TT ”(Ŧ†451WHj0[y6LCIBh6th'g)g+KV@t|u wH=xO+}Gh =`ߦҌcv:'}`O~N45cܤL;CoM&f& =UEa@5J DRJ8B靨M0挙p֠I#5#!Ebğfp~|MD UgN~)ըWŅ?|5do_7w߮~N&d&B@ 8n 3(9ݼBjL& |HN|rӑF4qJF/bzx8QLi$L_!25B$No3/9#MxQsm7xx f"\kmc.Y6QsvS{e>c\3RZj^AvlںPiX34k\K5[lF3"&wPEbHԡyPFeR:swxhr6 gQc?OUI>9PQq/Qɸ,lPRWɳ@ݕ5RJxrhl(49Rz`AS>2zTEfƨ,Oȸ?Nn `g:>\U,^ޝ=&B>4$J?Q_8&NLe' `WS˸vyW:r/HB`kD&zz#TVLxFC py8_':^hp;^&Q9z+W0P'?٣$ TNN'qQO 
'דz-Y9M~tøar`\jԛ8%c?\1Ru106 &80%{s2HbR6_qnPnb-ce+XYǦeo.8`g"I%N>qB;eO$X7z9'_'*;lܨEUeAz=2?2b;"|oB+[00"}ZLrڧ(WB&D'JG5 H yrIZW,R-P %ބ Qt3.pc@&.rޅ T_h}g#dޫp”m(^fBq0v_qCk^Yi1|<0"zϙh-R0sf$LY:7$MdO`ɩ"AۅuߴwA֍t"V0N!PB*24,OnY,_+ͽIZ99 k:Ta;uwwK]jNvA%08SG<1c3y\ib"?VȏE~lZd(LZHlPBERZGhb.XC(Y-F]LQbҗ2Mft83*5o#:PYXtQŨU+8x*ܽ"9=HKw&|[\-)_0E#=p⡛9zMz5^K3ZclCrS@~fy9*=ô9>B" Xz):\I.}a'?'=~zr\ݻi5"h!0+?sw@W<11:Za.P万Ck[GJUI_m%}~]e|e8foمѣkq3fdFPoUK]g!¬أ qT>p(~/-om<51jb)gD b9cljϱ\ާR?3?3ꇳOwVF[fV.|ד9sTtw[~WioK,|{/ ǍEj,}TZg4qϩC5Or6>yGRiꂝ,]TP4vj'4BL gX:=|w2#ĸٞmtWPKߘD HAG*-7[7HVK&f:"C.7 83"PF:9,HӖN wf7ڑq2W׿7\VΔk>tiƑxƱV! i2u3hOQ7U_)$"$=Lo=)&G;bPD%W '̇( J2xa$jR  (1z)(ht3++r,(Ggm X+N02 Tpp_+CJ(e#dD2v4*y,+Ql48Q0RǛP>ަtxjqKt" ʛ&gg CAÁ6陗 fh#c:Ǭ7gT;%ĉp^&m.$!T%-%I8\eQ6Pwb*[A͒ݮ;bi~)E-tPYPj* Sq$J}DH' I 4"1мj|~|MD Uծ~)ըWŅ?|5do*doWk)k9@?].]Ul@r^lO`0;nXpfsH4 -}fʄ$^F+CEUj[qNʨMJ`D:Ui.r 2LE[yQ%6(ʽ ߞ{\KȽH{P'Lj/{T"D@UKTeeQ10MDicB9/JN*ƍF;U)v8vD$wI$l~5*k@(93(̪ZI-<,I߇ iEHBIX{%j/IT|½^:{}m9زO`48̆i<:+&`h`{h(9yvO{Embp* !yJh9 P(`/l 4zpF[h1 ~+ 1.7y5nMе 3=鹙IOI !|I@pݓٌ^$5"jXfb/R0a(E0!,~Ĵ_ d*DoH@)<Ē* vNNuj#uB “B|k0 '#D|&ل(♛/4fmW ]||A/ێ@ zq||Rz8HM#زZnRギ;ۍ%?dn}dfw˒Oo'mO'ǖ~:1is?ۻyzU43G[U3q`gc|9;go>çXbtRkmRyƘJsb$Jw[}޶ZRPʹ7BW]QՄ #J~(p]A{F)JkG6JP(Ω1%>o[ QJd܈$ctmɘ#Js'Kii 8D6J9 @)?VjQz(RP$P$$;Q*<<hPZ[-4P dR~(E{R@)J?V@Qz(%RZ JIZ12J1~n1{g>o[M A= Q (&%[4\`\]=\_`;vE}[a;ziq7}hgb}yfbej{Ϭ'l[Wo{ w?~{y?K.\_ #5a=$+$?D#-gTH Å{6/?vH؋ԘAAEhͿތ ͞N9[Ǔm0\ړdj_^@.9^W ) \_u/(ZJbfAF97!Cy4>_x^IHBԁ+8:,6mOrmmK"aCjsMf5&uS80cM_~ HTf9c ~pO~dz1/Eĉ:IaǞ(KX~Ė>xEWvj& 攥\)uEn[T`%$qHy2LP:+ܖ=FJD0T/oEQOdd^*S 2V0%eTfrJVP`Eim!L v6v; Iތle١EHVOe5/{ܺ/|lJe>\޿ɿwݏ˟fD)# ITZ,deZX.UVċ\ũ4.Q<d\XOӍ2ɡ4=iE$a<}͵Bp-"gZ ^TZ0ۺ ,`sZ,)MBY> ʎ'ߜ~w聯2@j缁j' wˤ'x৻3~"m^] c$jP-58x e$;y:';}K;Yl'WIu. "xoP3'$07IVĮGߪ[R _p-͵X面˛\}|  \hfg~+^M~.߮7^'><ܷfOhYϑjW~nd×ſoiɟf7EKz`>2m~z(n6cWDkDm>5bۀ{Ts[7[FE8A7 8Gߗ.leħOqG;gzAAJ9Ɇma.P$_¤ӵ r{ ȇBi| >D5HVK&#EoawР^X2a̝ݭ ZU/\No~t&LJ uZ-Bmp[UaUrE~ֶm9~?nhō׶Uiڷ%U>7~x?{ ! 
]ˋ'[ݏu$JhŬ[g-E WǿzÎ{_9i#P.6zB+h1Rd7͚r*#fX@'!m[Q4B^9D0յvB^nd#fX@'!mY.şn} C3,Id7Gy;p2"=>K]q ZRN*X%mLi,SY_DE#E.x`e);\ +ڨoq-(G2 CR9psYX\e OF)BMi$UfGRCC aIfH1P\ڔXmx* J2e]jDS53f$=#X#tN,j FE!.p"}D=8P1eM%/YZB92lw3YUfѲ ]:Z  sB"ELS@N=xz$3z(~ݤV甜|Z|7ju}εnSFSS6&aMNoId[-dd_Y{Hl P:@{7lmF3~|k~V`/ QQv8-2L&23Vl%\j"]8SlbW(H:?F/unZ2 @{M" 6t A\(4nS>^$i Ni_{Шv% gѝMTĆk<ք&2bLiҷg\LhҷkHŎܶ:K{5\P^:jqXzVK} ;_B>g3>.9eZ+y&!kbPpAEe۹$U kgܥ ٦7P-5H3,ZZpݓٌ^="] X.{ M=-BC!g^ @㉜v-X12-ؑ'6ꟺ RYmcm-ڨʻNnocffOքk۷(Nc/r/g.3=VRv9~`| u2弣_,DwO$mɺf R^>IFxdXO)>!)%&?Ne7,Fn>N7Bۈ.şn} C4 S&_qg7T8 Hx)F~#HdzYRE)|E{@+hgL)@[u%pS:"| e$OOG?~I/yzʞ«?-A6٪TgYq lTm,^ljJ-] /ܿFD>ŷ\⹙H"aJC}5|.8|>CAm ?L׍GU:JL*Ly^ض[}/8PqЌeLo梹4 b\K;?$].GM͚10lPJ85x?`CJ#0z}}QlFO"j& Î9A DK5 {]a~ ɆD:aԳR<ޱ` C q՞ړfGYhf(AΏdnè˨z@^g,Yk;[Dzwnyaq/\bA`z먛>?mu9 ; ˻zZʎ6jn|.ڜiGwIR7s9,p;/ dFu<!IJ( &c"yX]]]`FOv^ܺrEǻj#-ۄYm[Jv\no1cdwR(yq$z6ھnui;׼,\4_=|eW5*~PjU)!W*QLao2c>F>Vtݸz|6Yu{03#ΌF%"a Ę($khCp0Rii(9huT<s<:N"y,? fDvs&^bo^Lcrlpjc!Sj+?3 l| m,y*նBm2EwP 2*NhC  +, HS5_H(A ZV8ƂjeLaDXJD"F nҡVUn`mzv:wqR tbIڵ4VƎ_(F@0m;tw\RL8ַl?ɝ&o[\e!&*8-%Yyv0$v.t o0%vabK@wHIuާY;ek&L{}YKv]8? HVs'/,ʑÏk9L3JC*Qp 'Y8pH$pH.Rd?B0ҍ>lųÖG&`.~ g~ȕƑ"O ^~z'[ pW]bkۃQU÷ceWQCvݚ>G*5k0nBUt!qwf NVp9[l&&OK,6!E+[S)Ʌ-S=9u1; J`Lp5QB`)j{!4"$@ HWު?*?OBMa$5D0L%*kLcJP q! 
a,)uw:T[_f0N¯<{Ɲe[gz:q Q?'ys&g4$ci'YK1w2Q4&d ~'$y^jz<<ڭ= ̮NjFVZ+ O#2&_uW42#WJxL?MU"Td<*65qI*cbScrtm,gJnwx<̳B-9GϓtSMf/7~OS4'y.4.?nj#ZxDv8Ǔd7/77| ńBWsOj UjTe/_T*ꍍ/}Yj F W}H?9B!YzL@ ̕R@~,d&B`) ?RS ҋf)~,e- 'd1R(!\YzY,w]fñeW^6Kc)YeŕRՓWOłeCoK Af)`?.YKS)He1?2Mσ4$W^0KuC$e+K/2K ci*ҋf)~,Pt,Լve%yRR,,X<}i*5K/ !Szc)bg?!}Yj,Ş,uEv"QK C˔lMr ) c 5`-W#%x u&RapһX(_Ew;sczUGS@%3m<s7 =v Zl}, SW U qWH\Fi|plUǁ yX] FVgH:1 H1n AbZ"L"'Ča LSPQl44D 8H\L6@}tv7 lh"mӍ$HHjX%Hr2'"B?`т#UvLu_GK<raoz_u7VWĄ/wi}Og%1bl 5(iCSE$Q,#i`\(!1fFJDHϒk96Vdmӈ ?jCEr/mT4BI1I܎aښBQNBL1ȎT֌ZaCZ% Ң9}Z#K>NCѯsW 5w 1Z$/yQ-%~+ˇquwtlR7,_ *~}gn"<@$(?:b#KZiwQ YB+ݟľPh.XI$"aS 7{yzrCl~|̅\P7]wM jQ&oj; uy%mݧRp+B' SZ` V_WҁcRz+=.ػ Z`Ug В}k _T{M jxj7:);r%'r#@M3ȭduϕ c" րp_@z]=niT]k]6WFcŇ:m}[Z *Er_Nۻmfvu(`842nb[wF27[h=WX y^N ED{rwf9ǃm5_wNfb ^t 0o`cj6T ذ파j6$fחZ2IwH-,CV+kfƮ]+1Afƹef<7~GjUNbgD{Lg~yD~bv:ng5nn?qg2Փ榋4oV wvjt۾s7E+;%Z,ƫJF`'tl͎xot͓uCNi3C,@mĐ?8Eq\GFeJޭ |Up/¨7%[]ÐVE>w+$]XT Rh >p5ϭ9,SF.} g~FߺYCSޠg$/Cx9~md"v=٪LvͼX| >'mu}s)?08}~8鳶VvnX3>gmK:zFw2&׃8g ټ ЪPRDvfSO3O *^J>-RKÕPW\ UK fb-^kkzLjֻ] ɄGW*;ƵIwɬ'n?h=TQٖx~HTYvuSe:#g WO>Sk  G'C6h+ *ptD%s־V{\c_{ɏ2Yh?6);ZNQNM `i7z ~N#[`I;鯘'oϗݽEF6;+ v(zX@Jqru~I}_NٵeRɡ.6D3uhB(&qd˜ĆKa}(0#0llVH;xD9ф$%7 e5Q)8mL2#!Be+o`#>,m(Y= iY}-}\/@]HH^Wa$CCCxP(ѪxN,hOXt3sdZY}E(=mKp oOU t5ħJPQn#O̫[̟+Yf*+PyTHe]+!H JQLݠeUv 2}D(7m,؎p$>.bN>L:P?@(]Z;g;KiǧC}2-hWa?x} B.X;]Ƹ"1;(ۧu5}(/J%Sɋ-?%_kS^ _^Boo:^M)QNbBI&c6*d "Q.ܭ# C,%8HpodU!$%uXo`} 3 T- Irp=Q0;֩ WO`|[' |?@cϔ̷z+v b_P~9YR)ƘP+fyZ q!I}_'<R5J¡qJn{Ăc&I\h/_?LE#p#uk[ cs)kϋ,R/S gq귭:+Az_WS35w3U)J=Iݭߏ݉, iɲ_:vA@̤/mM5UjS=V!Fcr* $!9bW _.%#18L_Gﺗ=䉇 *0IxwoY]`wÜ&r= k^s0 d '3Yҙ.>tJ $BM9g Rv8g9ɩ4^$@Tus0"QJE]cunμNtKSwE0~I6 gW_^^OM-DhAl:"#v]g{>Ä$Î~{ y}`6ʌ֫rN B.#-Pp˸?=ߞS-'=~Ѓ&zWEsi*5Ey=R4P"yssKjbJ#[;SQ랏j f#7+ܹ> jμ|a#ZW`8lC6L7뷦Zb:{GZx zRIezx?O^bK蝴5RUc&V&d1DDpGSis?X=jXZnoyh_Tw K7/ <6zcn0ۏ'拾c<_]U0buhS4{lSj.s b}:[@݅orHB޸)'Sh7{n}1HchY_jn&EL5ysUiTSn}1Hch}"@M㒨݆7.DB;}LQ6Jx]w/q30pjbw:vp3s,~THlBȴNR)/KJӤ3\Tg\ ddx^ϘBIJ PY>OقwK^$2Yx dcFx6T@bbfrõoUqϢ4>bۻ 3zs1ҳh tr& FwvFaV ڒ¸js) F= ]PI,0305(Ĝ5!B3qj}ߢ C#E 
++Fq娝^F_9oS+qպ> r$un= 0J0vb55^"C޺Eۧj|4uL;!׬,SuSL(Y*.JQ rxmH䛫ARN~-:{Hxo(@\\*V4;iP0FHX5(_VYmjxY+z8X4+9S[tK%9Qs!gBQ_0Z",c&zO|.˂or.ZTbb"2#_:dLZjBB% $w̖5Jbe|ktmk-lUD0$0Z+MAH{cC OQ0wmR"t񰎅$ERF TPOR  alyAf$ϊz/dwK%cowloxxP@%z;#tjȡ\2Il&iH*\HSo0G[?m]oh{<܆ŇLX9t@U8?J~D qve ǭy$I$W3´c# Bu;Rd!41"WX-VK&%Hw;p8cUT*u @i#'1u],@R0i5PG53I 3&)[&|Sȵ&%]P0$JBJ!92Ӳ|lkG4,XY]Θl<̣*`=bK>ڃ\J=~SILaPyM:{%ft$s _cU2DaScN@@ ̆<&8g9bZۇ:4n#E4;O/jO.VO0_1l.n̮ߟ1$?gج#1XMu34"kY`^& tD]οfz'E'p~s|XG'1Ahë_jX8f/!Y~>?p|՜1O,[ MqM{ɷWWݱC'?\>\|ad~B6]skmsx QR):y.v뫓k- }R`!ݧ.o (}w lhbcm\F_] -ģ;hRBOC禠P&OA+2:ÖPH}>(lL[|kų]JCub$1nՍC!eQcS-b=rg}!fX{:-yX1tד1k.ju{S󔣕l'STCjd|w~o?7-.[$}΀6+1"6mEbvcDA!Ҿ,I+_̀4mLZj=T Z3^RA}mTDK^ԗ} 'ʘԬu73ggF`+YT=՚M%Fgk@iېrFL*AN"?/)᩾cDP9D=ZABB޸)avVϗ -qy1 #-s_B޸mo hϯh=jN1rhjeEtYK) ̀D%ʟ aCx@IhPIյ ނa*uYw.R9Eea°0x-*UI5J`XIOi01è LYf`+3`ď ><:+{o|!Z]9&sKAST;MsdSc. ڣ~v}~@bo)JhAo$"xQ[`PgZۑ_]dzuc7ӫ[ƦlE݉)0`@slNݾ0G֗Le>IF3Cq+x7b(vѨ`ۿ{s.JK9b'1:BV'V>Yg^[Zx)m'Dp);KӃ>>d?~Ql1kr>ǃYN*_=~/O#l˫5 >J9Jy?#*9(~`N1d)龆SAGdH+ŷTqAeOy/}HwPC}( G!w/Fn|0?Kn'Oko랢7MO^ZpXȟ:TU˅^WHEL = (5+B})6=ߴy?m`}yd8ULxq+RKÛPRi㏑T֠<d;X)[oMFxZpo] xdt!}'֢ F`7X*g. 
CJ5(!dERO|`k$K7!A6IebkSzMASAF1K&(#GhxRܕW&O7 {KRl'J}2;(Ȕ#aco_OG} |(rHô6q>|dgx;x3 r+x7"AvШR }f^ɡ#7CzguӸ_KORtK?_|Y!^ZďcqU?Mm(gF06W3jԂpűp_/CMz~?p/rUקo~/JXV~vcHlYcBrؾ(yVHsxshm>n!gatR:፦'OQzh"̙jGyi'>}v{Cy5 VY':ƣHa|\#^> *kGANV衏nWolh(@" :qx%ݍN.^7*CoRrxž;]~sR΄l1M063guf"(eZ I,)З<!ꉠ@ШQCd+tVYq"͖-m!je,g>I& 9ϼ*9cA)/|r{}+ WmLݴC.Xdjof(*FLĤ׆9/)FI+}j/!j$1Z-}-&{ߙ_{zbyf8sD]# ܴS}%e|X<_(j\3>Cd(,(A_h-E!OD )EER0rN) 0"u-Wy!kYP:s|8u>41!C|H x Sdj@ N`\bqhB<3mǤҜӦf_;5D0[%ebx]"h>}76#WoS"u~WBܗifK)-mX>osv!gZVJ!U*eYXDM~_9IkЁ@,2.e$]:L%idRx%&է1{ 76lCoO>\E##^S+=aS.%pۯmWXoFkT?"%o)h'`p7ł~q2zX :˜,pW}:A8x\p AP==hE#Z1Rs]~^Kb˜yF$/a c˸_'jD)&^-F SH6K1=˾W84?f(o0FCr"ELmӔL'%*pFF]A;W#/E }J \ARF͔HJN?4:8XɲIUPV10%t=ΆSKوPR6#8{0SѪD`LU eABr䍔L˂3Ict\08lT3@1wMuEc%R9SBb bKOq@{T#"5\r~*s 2fV2MgV-4sh:?}P_\shz6nVkY}oR/$] Hup M!Xf7omL9Cڻ/z[ Uq0iXIAu~̻iȅ?T"Q`|T_\oi=T\BDT r Q)NG4((z'\(0>z/.)8 ^wӖ)HqMɱx,7c@J,<4Ӈ_|vu^unr ٮ"A_@'XȖd;v8o`\/ƝdRO ƪ+Zmf+Jt%@ox{5Fi1c/W}o5=Sa_ܾ+*(< (*ea$ʛT]_kc|FRrOYTRuP]^9RL,:82eqxͽJ9UFYz{'וո/Q B luuӀ^ȲdDH45q7˴S}%# -t8˛7QPBJP4ЖȒ.ybimLT 4 *wd S!YYDC2!MW6R)V3QBCXHFX,X 3nE}^-wGu>r3+6J.3+O3BaxAĐVn3: KAD&W:, P7U#Dgq1?—]¡=5=B\2ɫaX)%rNzz8WʺN!ܡ/Q1a1+ `yKȮDnM!pCgF=UsoXnD^`wzjrqh|\ȆsfT(0c0UW5Aa5^'0M?|{=_Z+x`:7=K7VA ,W0z}?0kݥ*Bρ wem$Iz:<"0Ec yJ(")[`FPRm5[*EfVkUw>+%/8CUimV0ز5gSe-a͌j z1˕XU^!Ԝ x,)Z% XS@I[/4a/Ywad%wxm|8Ej6DHNTC2;aWLxNwq:amKQpJExKs5Ԯ|kuuYXrֻ7of[:TRQ:䲭LcFRBGA]qPT*/ xeҘ'Y+8Rd j4\-) HlSr˷#mkġN]^ vrޯ"&ᣮ曬".T#ChƏlAod/Hsxɣ1Xv٤V T6'}\.[# y{W\h. kLJѾ4h+ A+_t(g)/ZMPoVW{3|;ՒГHlfǠjHmSiO 3+*Z.V |RXUWK#d.Bqu{n:O{IAדMP`֭R A:ո- d򎸼wh$$9\U;qY4.;g@tC]𹾻 +A^%Xr3^i/hҷ -ha(&zMnhN] +jB'I4Q:}PF+{0g}(ar@.Z:|0Xv%މp{+'04lZs+Vn`^>/H9>RwgĴ,nh5єNp_?+)GsPQ 5CĒIY<|w:)%bUZQ:SrC>X8ڻoS4m >Ҁ^lD\ ׋VM2zd"NC)hܑ#kTi$G$/k1d F Z(;%ORdZ:-utwǫА3բh-+ġkG1dC&7@JkW`b@kz=:N\K33`b\휌qRUn Vn/|ov+gx#u.&" UJFrzT]˕JN!\{#E*qՊ]#dƅƫێ9/{ $. P";1[mX7cT8`A̦W YTȘ!Øơp\7`1ri9y! 
.~A4TIk$sO ٩( |aGO~ux(wI#;MogX 3 dYۢ 冞x-}'-ˠwVC.$,fMqS?$;rCk֋P7=xH[#I;j \A"̺𼰡""FʛP 9`K[л=I?[\7)NOPb`%[n[v;xJCHWmW$~I!oG̯Kfe-pS$MH E m gǍ|$N;7p\:3y|[jL%"C+Rs# À]|6o.M\%u)WQ.BA,kXpo禗1#=Od \,a^u"@1T![qUqwݻ)fB}|ۭ‡<#0RodYw5G뎘01ِƾVEcROE])<'9ury~#v~YͫzEmZ+4%,N!o=ѫq\>)%=?7:"h&1($Dvr$%s5D,z>taȏ;>n-B \Sigs,NwnTaʦbow l0޺a4R8|տu;pK\ќkF`h\}J fB'Zyk:rv⬰[*~CyEhSmk0 { b4[Q>TG=fV8i M6W[뚵Z,4/'w),DCh`&ۮHNu7QQf0 PS4w8ς hB~}'\Z6ʙu "7BQ'w XTj}?d}}c>jnyhIg; 2]O'R6 U)AcMzL7/iC[u?>~S=r1Y;=9'n ƒJ T-h%с߸׽h1oOdBk2P8]ҕkWE mRĻ1MVĩxĻfYS+$hl8"EPwͻ 6Ųn SAxH ʗSS%4;T$;hԺ1WEGllB?&<0L+u4^t6$}ަUY /k}ᩀZ][uD=v$JЁQ19*@srkOf/[$jo/v>cDXNbrqV2HO1D+we> uK5"~+T;]ͮX%'tU"2;y<:,5sΎ+AK͋Д_'}/)`ڼ h qË鹃x6ئ}mmڡi' ON_29 Yr~5?߃<9>J~$_P|D@{@2r#a/Y\V绱y!__d|tA_?~54y#q^LOސE_)ZK9aӌVh[ߒTZwb84~:9 hF5n~Im:t 5XImp5޹bcC'ŶFvb~ڍL4mb7>>'VK\"Xmjݡ r>(TYzI?]K.*'{`B'?8u9.p`%d}i`PbH?٤[}am5Zi.8hM&mHsk?7Rmq">SyD( 񕈑" +erD1h_eo.~2p\gUA;H5+J&jjBJ\]rEOEC>D>V+T*9?h։VNjYJĈ")ᕈؠ7N@~R$Bp'mBqۦϻ΅(,A,AIZgWq :YM= /:[V+7_C'Z=R@!W˾Xׄ |yqq=<Fu"hD:d%kHx@Uyw? Z^BU}:837(5=?{ܶutvd_n:5ĝ$mSim & 8C -% q"a07`f,?'ybaDpzL+&7h)KD5 J$K#UB.P^6O`NG[o~ؾ/ GlDH @ >;x4`ع/ (-iKɩ1]\  IOd "PT@ϟT{-@͍}*W63:DF&(8#RCB$J|CavGRN; CNI3K<%!Jp\]߉n!ͣ٧M}yreb"I.݁" 8spME¶fTfm f2b*&w*3W&߇ 3I1'W, 1 f}0۸]8sBy35󎫻l~)ٙQ>pU /4WgC^;>K.x_$dлK7tA{,=S)3ܻqݳ[u]PYO< Y0p2.-s'wwr,mxhe6rfoSmGOw>MG$#zv_(ً͆,b| 7ܣ!1^M~۽ˊ"(\~)m+TQ@xX_T 1@?gTb<=8t!A +EL5*JOJkY}SAt72  VzV"|).@TJwQ=.SMbg[}c`/t266NJOQj=Y犟RM0フDž)Fkg?\^+7{ȘNùMF-p|\B0(u*lg5ĩҼc?Y5Y]QqwdnI`vH޼,Gb11ˮ)ߠB*{,u/Q_—ҧ p !ab}8RPb60PO ~)c"z,}Q xRG4qA KqQYgFdF͔Xk0fU #fw'naI[Mؿ4Z)KbWxsv~,ňF5N.L ]%-(]Ąkv/+n'ɽW掾vr<~wM3|ݝK 0\;14 @ҹҳPnj:ݺ5z}_-xuEM=`6O}pQ>؊AGM08TGZ֠DjƊp[ X VP 6a A` Qϰ05RT◸$/6j&dqsװk0~7~<]ȆZ1 e <Œh}MfrqH&.d [ ^l8"$1R1#RHc-P$*6ꡀa*@l!ZBݕUWE3 6m+WGzJ>*}xLK,ߒ?~?Fˇ z7s~C8Rc@A_F~}s6J&2WӻUZuqfA(Lof$YMYףP&y C(F_][>qC)v5בi!x $xMҭjSq"UizI! 
ۄk.$gK`{GoByǛq9j 5c}<"uP:`+oϡo"Z.GwUלF!ԯ{cU ³ѝf2W(wjUtGuR[6QIZvIA#'\XwT 1d jpnE~PVлOhIZ#_]EqgN *+Ic\>;g, M/j: rx8@CJyzk\ ,RK 8Sq,}2c{2RvtHTTۅC E']Ԉ Ύkq8H,.ӲZTVMS$RR҂DD,hI",4 6=..x\MТ!'I%8fTcO^#6@G˨٩RxkKze'xkERQ1,#P (0;pG@yQa2;l~aXs!  [' 6y ?I: $[n!O2%!Ld_5B{(F`% E$bqDh*ABܐFD rG`>oUQNZzbE)! hx Ej fCL HQK0DR%\l"( "="A$aZ T@ vs9nzsy FmiB\k^OMLZp$}pO+] m U`DC{ ~QDl>tp"lRusq\BsX`4 :ܭe1yѾ_ܞ{ V_|ytqsFwd4QMvƴOځlh@CǓC~1ڇǔ9Ҟ^޵( C*Ue[2&kU 5rԪPX8f5#<#BlFNwj+lS:q7=mczq0{?-pyF`Ah/!f VzVKD9RwHser?9*_.4%(dn,tmU#zAxݎGf:*Ԯ=*/BVN;#q8`Ƨs^m:,@ytvN} [z2?lj@!pJi@p.aڕY+?6N|ްW&eUNEnMhX9ΎW(~V_7HT/A]K(A[%rW;@U370_`F:[UUHt)৪xK?ɞ ht_I)ԙ(I<ȐPWGq=qc BCa`Do"Xf&i"TdJ&|Ha3FK6RX(x7Y,$p2i@A$9ނ_y8кu!|'>JyPY@<=3t2tfhJ$hl^_kv.-ݵ⬑Լ 6l9DOt*zp`sԐ Yz&iA ,x( ;_$?6gK@%{(d @Ǝ՗¶䮚x ;BM{d( uҘ ωh8~g)Տ^ꈙpܚ;<^h)|=9CmukAӈo ˙o+g~zbG3WҖ$w q{CBd;f2{.Y_i1+P׹45#+2}GyUjH'aPBNքøyYka*ɐC{,NPM %˵9Ap` }é&(v RO׊kgzّ_1e-E/L^6@$eyih,I3)dVZҌj7FHf) :j ֝d *-fl~S;MWɵjy}Oɧ|JY6N:)S( j jy-J3\y&7C\S<'T!ЅBB h\k+ŭָ?yDģjFucfih$C<~MjAyӻ.2\҈e\YNVs}z?luKsZdMT_ j_aNZIKW (ɜ%g£e~)CU2tIK mhQEkBhJ^i^Ee Y)4s8TiBLřPu?oRmee_[UylwS=(T2K>8J@3!\jUP\h1bU1xsĺ;7 ;@no튞iQ[Dg3o숁!Xd'%BL9U m rH[W"=\K~5[V Bijpv(ŝ~ x3[gO+f47($UX^ +ĽL#$yYfHԧQӝ~5K{ƸLMP#OMY$[ڰSw!c8'šst&zĤRTYstUz*FFx ZeBpyJ+JtAuRq 3L/ w lŬmD!E, PV$ydJVL8c@#R+\TEÆ[BD݌xEЯ8'mЦ@bYbwA !r0zc̳ 8#~ϛTkψE:'tJ[Igi`Qyj ޭ4'pne#s߾ jJeU*l㰇B gd>B80hbE⸫nQ5bϜ '/_l@rqRZ~fhoѷTt49C|pϭuϹ:A{fx?۪siSڽf\oڐFˋ] -'tl$F/.5siXڠu!GJcQ[!CFt  z2܅`Ho+kk^ުe RsETu-R5"mY Y)R8VB,K*˘0ƓԖ 37o5?oQy?zL<@ #6?L7d؈R 4#.&ɠ0B*[3=q$vΌqۈ04nCP0T#)CY K+^2TP%xN.ce|p”&oє* ~e6aWyNZ]lc k#lA r&c[rcn0NċT~ɚ@mbWnqFkYo)jI42JJ BKjUKm-ml+ 4VR :6A=BZ@=Z"[pz66 1=V>"5c]N緇whT_[GOH9K}7louuu,~Zu߹IY1 Wp5Zۥ &rt!7X %=(dnor-Fؔiů!o^R>q ڈFSv'ɺYYw[vPD+1&VG B}&>+e~dFÏ\‹ԵN['sy?8@2EgǧeHQɓIR$)yT ~~;+BȔ7P)-t)A0q.2PB*mGcK( WI.?PWުۻ"x:ǫ<,$?-)hdXc^(y훢[!(Ҁ pUa}E>2'rF@ln"a\%O<_u@/yoIs"pGպSՇZ%\]EauY k8/ێZq$GvRjDGŹuU7ù#ﬢF' v d8le|5V(Z Á_4^Wn2z{ҨưwנCW[:[--YLSs|҉9Gsb73-o| 8Vepn:vgu#|$UέIશRtĹ~Me8Ӳb){&[i5sŹW! 
\Y ,(XdaSZ_w8_L=cD(]y.t25pT-mm'Źol8r[ZU#L|8sOdgs{T$)y {-mVrްb33}}Ga=e}:(P}mىH|i9@JEk;'K|4΅<-D s{`~Su^,~~[>ǝޯO!L'#&-mͼS0X|Nnyu"r|(ZɎasΐ(fkws} xXy:zx_n]=a_t9czyE+|\Sҩo,Us傗Ηyӗ)NQ)Yciq5dJ~yW I,8,W=!h'(p!] g"q]eC!$}t49LsfOO޵h/r,g҆I\J@ Is2*rlIlY*ќ|٤mEPZf!O# 1HWks3k}8Y2g'* ڹR"G[[)L&C𠠺M.T;F# |igHL LjgqM'Lj g:d(luO/=tm׋[lp X_U醥S뢟8_.bzf4$Rg-;c(J2BR1R`L̚Wa+3jڦq wK;xd,n)YTMK)gSZJ#'XCxTWC[19+ٵ_C[#h/MZFS΋[@vr \1IFTP qiۇ;^;t*N/6O#+vFV81L37 ~n0kmCm6vS [ mc2jVcb?Q8NϏN΋<>&zK-FR mI)JbYʾvWl}=o\hOhN&X !0-L3g'Cg -\V.y+ɲ&A5kxy*` &s _ ;-Vb*1䔐9'GNL1 [/+&ώT kVc*d ~ȟӌcfZg.M"fzdȳST 2(燾z-à^]UZ2%a2˷y4*ǹ3;~ޫN]@Gdބ+_s uL O6UJVpNP<^O*: ! TSqy3uwDuSjRJcsU=N=Om 7fGB (hGdXI@a٨=lYo>xT\03x!)Jv)># ͖wѺŽ1lXunAmbM~,Ɵ:W@=E窕R-7mu%b4^u"szj?. f&xϻn7 6ujYC 5H-veIznR>V}iFGj w(`ְL!g &֑ɶ#]P̛Ze t#]dЉ.#2~ifĺ6J׊ T URK5b}fX6렩N*B /B^Tr՚}>~SeB.jSVNQe8K4H&0R;Y.iND)2kΏTDvӈvrtJ,@PWށCI0?~Hlکl6io7{r:*ib*dطjbS[''F$ȯ>fc qSʂ2-\K\;oޚ mk>:¯iٽ\#z?P墰X+Q^9ZX O%s1Ai* ͨRVw<.t& \NwDMn)%kN hxq߂ M5Ţ9 l:k<|(pF6HᮦȌ6F @vXd.V-֧k3yS`u=A6ߐfb.DN ArhoyKU;Z?fۖMv;DyZK\o+ךEAŊu0?j%#`Ԓk6YynJ:r݊j&P|yѷ36H05uR QO<(c} *w, *f*3R4<7zxK};*hqS-/M6T ̎6\.D%xl?8Y8> MQ3Ur*Q%;휳??U%YnLݏX[Yl}[]I={@Bfs5G:lϸl2Ⱥכ'țvҞV~]H89(XoJE:h`1;0&~!to hʵDJmһgaL%vv~1glvʲT-?lb9!'$F̿^ 8ok{X.0Q NޓT'Kɡ{pR$yA6󉡶j% {RHmZ+5 Doa4S!R6g59)$0HWIʹvʪOxk}=Q ={ѨOǮb^ km*p<[|N~r/iv\3a.Y*͒Su*@Ά!rAWʦlqnɞsQo*$~eIG\̴n]-a^jKղ>m̾jʄYu 6ӛгb5}7ĈWdw28[Ul:*۬{gPY{C3Rn%dpl 4*$BvpSG y\C.٩Q-z )deUKԊqa*s z<be;k0:؍X&͔^a9ͲhDv,hHO@4kLz5ۈJWl[@3OV 9+ g<w9Ի ~_**3Z-O :ŢX[\pS>M%;O8|l?`$/#bHԦeSXѓu42I;L1ע>t]uxߪ4s$#g%xz%/X]d2;88Orjjs'E>T|cGdeK:2G$W.1owO8@|]/}29䉐r'b `s)H}qqRr&FE̦,gd82SaSk~M+T+pWw n5U&ݕɸܳ}2!X:?zk=>#& jjn:tz8l}^t {Rv5RK kUK:z;cF[b$;aOW1ͦ}JΒtuw}FZ7RH:FWe Aњ~ƌYD0I#Ff N}{|XؔZodכ9 !w.Lx9j/|~漷ar-y>|FC7+D\Q.YUS|.V״ie{^%-VR.>Ћ$9O}̀{B'D38b#i _w캨G^Upg\hY ʚuh}CE8. 
o6x૛becp{WƱ˾ W!@ 76qac}t(R!VIiLPJ ģUutuW K 9M@6%4< lmt~EC'HHA[[; O}Eh$J/:Xg#c4044PCkѩv[zgJ3r3ܰΠk)olpf]+ p֟T/m7gD<Vl5ArAq8j4$2 (=(ݛ0w1Ԇ">ŀF)bTT؅D!rKɶ:rź?q减Ӡ'yUJra]4YZ#V^n@ xVm105KBD*؍s&ڔ;cp#hն%(k ˆ[s]+ڷJ02<Ƴ^mFh RxR̝sΣaʫe&*G/f_6 hi ay1SFf:0\C;e$[uM`q:XP1 s[[1NA)hd&oiIaN1'-m5OWKtDWZbVbD AB˚q=<^QA@^!6D {]dME{.s NqI`lb)Xnb+& vU[V/ޚR4c]S:v)jtJ|%pVȁ$6_2*/?(ы^`-eΐ]X7W .s^NΣ+aP~*zy'-$n*$]DuuDФ .-a@[|eWH@ln+^z8!HZ| 0Qrq\үpq2͈p'yga ivQWl=sGO.qsCɤSllrl0b4Ϧ' QZ}6ퟝ"O^YcMbGu(|>مȼvW㒼(+?z$@'M]ǚjW?tb2\#ܛNUGjj;YvY;blXpsL7yR>27+N#|9I[5 jS7V$8 L24]H"OO=)zC\ J+-}tLzӸTVP(lGtzNOP{*nd]iitRb4 ia0a(~h)JM{drƹ >@g9R/F(1祰RFYF%.5BќﬞR9!6-@D.HnhI_AœPtpٓ좖-֫B|\,ǿz2Ŋb0Vx'2CO[5"9xyc}ST=RI,W9CΥzLx%ҼZ.9=(?:; 'Z*D+2TrP}J<PWmޖYqcA oQBۚC+@HxVVve%Ns`{:$TD@G"'\Fy$@58jENIK\9`D⠌*mC e{E-3huپq` Bp'G}{#W!@#Z݆0ɟDWr;lk3FwIO416XRxPՑvEH?گ^*A={r#HS:mi&P! KM]pG}mFzI ̽mÓϜuzyK 7 %*Fq[sA Q'RGL67@'4TRIJdsV}r8,5 % *Z64Tɕ{Z^ἛZUMMLhxDTwJLQ|VDPOwھ/vz,e F/?ϣy)Jh6ߞ+ NNvQ$yҮ\_l2C|94#X/Xx{Fe$,*:th%2&sE/_KBv"]5Ro3;-fXkQ1F4+f-~(sM-l10q)7`K<*]^8TLW r5#2ң=qXoVS3q?}~vQ fgqdIևKvZ]}r8-x5쿟,LNjE&(WI.b8ѱ1N9q0/ĉg*,~:͏o<-˖Q0=fybfpŦxv>\ QǠ eƽLLz-SM;s@Zn]]ehE%}_m/x;܇)"[9o4FN KRX4AX騏̃$|,6TgO.$0fǀ9$Bp)7't9nF;91~$i}UsdWb 1>ܓ":K%bT#BSP@Hmqj%P3=qZDEnj(|e.~i@y>s->u\GR!Evp #W?/LS60T>hۑRv$RFZ-",|$ĺːJ@\q,gJw69)4Bs4L܃-J+]p9FՂ&OfBZ {#Zl1#mxC`BS +!w;зyZ;u+2?4?c!P %AׂFʝ(`T?=d (E=߉hH0oyI'JI hkQZXQ W YݥLZ݅;ϺfZ/hS;U;/N4ne՞m1_\?ǤD.N}(n7Q <7t{2YeĦN(I@EF#8!H\6z8e:e4/+J]>xY],qP"VOCum=c臝~ci VMTJ:pH2UsIrVl$S).(Cdnbܹkې@QzޙBC佤Dkq2%nPF(H=ҺҰ0] FR H%jbr4](%@TFx&lTrZ2V rg#$ȂJ/yҊ:!@FS֌B f罁ơ˷=Zjj#=c(tf(__#rLbP?-Ξ.@>hr^8V?8sTɒ~~A/fSARV嗋JJ(mh-Ltf8e y.s͇ˉA *тҿ͆C+Z?JW}QQC_$16DfV]hXrs%koٵym4ۑ@ \۶fvW}/w~ߌmb1e=']pY H?{unsRr\~-N 7JIsqWO!B"kRSoSB N л'"FSAm"QyuQ[EQJۈ  QT۲vaԔ iEQM]101pJ$HC-hj $y&:MIƚZƊmFVq!y *P7i+5cniL],v꽫J'4T oGW T^4]i}ɪ_-*tk֙ 畽>qe+n0߀Y֟Y[he Otq_!"~ЋfDR;(8.'6U ~+S09I4imF-*=`0BˡVpVwdD<++ oٴa.L_n~/GGG;o=q`7JZ]eYywwAs `,kWQ0kW݃.nx'7C$d|odHZ.j g4ᯋ(=pyu63_G?g%mhtmO/7 ;~?`90ŐG6lsι  Ƚ ?}\?8zAv9ʹdt"/AǍ7_ q?weG=Sd߇CLb ȓZ Rkgm俧z+-vu7G}U-^7~}xy:_|;HYtweO ,zLr}8Ksџnnb 
13347ms (05:25:24.020)
Feb 19 05:25:24 crc kubenswrapper[5012]: Trace[931944]: [13.347535624s] [13.347535624s] END
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.020986 5012 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.021427 5012 trace.go:236] Trace[1542930860]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 05:25:14.009) (total time: 10011ms):
Feb 19 05:25:24 crc kubenswrapper[5012]: Trace[1542930860]: ---"Objects listed" error: 10011ms (05:25:24.021)
Feb 19 05:25:24 crc kubenswrapper[5012]: Trace[1542930860]: [10.011647198s] [10.011647198s] END
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.021455 5012 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.021627 5012 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.022170 5012 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.022371 5012 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.034591 5012 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.055462 5012 csr.go:261] certificate signing request csr-rc45m is approved, waiting to be issued
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.059695 5012 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34834->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.059772 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:34834->192.168.126.11:17697: read: connection reset by peer"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.060209 5012 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.060280 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.062274 5012 csr.go:257] certificate signing request csr-rc45m is issued
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.256375 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.264157 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.479338 5012 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 19 05:25:24 crc kubenswrapper[5012]: W0219 05:25:24.480000 5012 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 19 05:25:24 crc kubenswrapper[5012]: W0219 05:25:24.480007 5012 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 19 05:25:24 crc kubenswrapper[5012]: W0219 05:25:24.480027 5012 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.480018 5012 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.110:52434->38.102.83.110:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18958e7f29d2c0d9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 05:25:05.253826777 +0000 UTC m=+1.287149386,LastTimestamp:2026-02-19 05:25:05.253826777 +0000 UTC m=+1.287149386,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.625050 5012 apiserver.go:52] "Watching apiserver"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.634284 5012 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.634865 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"]
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.635599 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.635619 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.635640 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.635741 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.635889 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.636173 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.636278 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.636288 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.636511 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.638369 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.639002 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.639226 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.639462 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.639734 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.639902 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.640074 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.640194 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.641034 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.644715 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 20:25:00.182369389 +0000 UTC
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.677202 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.698099 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status:
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.724659 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.732026 5012 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.740280 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.756905 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.768176 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.786968 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.800257 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.811816 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827031 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827100 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827140 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827185 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827225 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827259 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827297 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827360 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827396 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827432 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827540 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827571 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827654 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " 
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827687 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827721 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827911 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827959 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.827999 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828032 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828063 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828095 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828135 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828193 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828228 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 05:25:24 crc 
kubenswrapper[5012]: I0219 05:25:24.828260 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828293 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828353 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828385 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828423 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828458 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828492 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828522 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828565 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828554 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828703 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828715 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828736 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828746 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828820 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828869 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828939 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.828978 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829016 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829053 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829088 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829125 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829162 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829197 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829236 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829248 5012 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829270 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829362 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829403 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829452 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829434 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829508 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829585 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829619 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829648 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829673 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829697 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829724 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829751 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829775 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829798 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829829 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 05:25:24 crc kubenswrapper[5012]: 
I0219 05:25:24.829856 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829876 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.829895 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:25:25.329865966 +0000 UTC m=+21.363188545 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829915 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829924 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829939 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829963 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.829998 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830026 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 05:25:24 
crc kubenswrapper[5012]: I0219 05:25:24.830052 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830035 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830077 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830105 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830132 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830159 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830185 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830211 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830239 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830251 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830263 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830262 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830337 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830363 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830363 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: 
"20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830389 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830417 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830447 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830474 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830498 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830553 5012 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830572 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830559 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830669 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830715 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830757 5012 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830794 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830837 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830875 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830926 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.830965 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831005 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831046 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831084 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831121 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831160 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831196 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831235 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831273 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831340 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831382 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831418 5012 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831456 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831491 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831530 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831578 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831616 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831655 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831697 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831734 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831772 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831808 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831845 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832142 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832637 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832638 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832632 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.831886 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832808 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832826 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832868 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832903 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832941 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832976 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833012 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833052 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833088 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833124 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833159 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833194 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833237 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833272 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833335 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833375 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833413 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833448 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833486 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833523 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833564 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc 
kubenswrapper[5012]: I0219 05:25:24.833602 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833641 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833679 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833717 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833754 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833792 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833829 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833868 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833905 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833943 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833979 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " 
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834016 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834061 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834099 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834136 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834172 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834209 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834246 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834282 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834358 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834397 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834436 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 05:25:24 crc 
kubenswrapper[5012]: I0219 05:25:24.834471 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834515 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834553 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834592 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834631 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834668 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834708 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834767 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834802 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834842 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834878 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 05:25:24 crc kubenswrapper[5012]: 
I0219 05:25:24.834915 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834954 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.834994 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835030 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835068 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835105 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835144 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835182 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835226 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835264 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835327 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835366 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835402 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835440 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835477 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835517 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835554 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835591 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835635 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835674 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835714 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835752 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835796 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835833 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835871 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835908 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835955 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836002 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836095 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836198 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836246 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836289 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836521 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836564 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836539 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.840219 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.832804 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.841956 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833002 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833124 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833272 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833339 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.833789 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835034 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835520 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835581 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835715 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.835957 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836231 5012 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836408 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836530 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836550 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.836932 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.837295 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.837328 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.837293 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.837577 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.837598 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.837611 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.837936 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.837951 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.838341 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.838525 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.838508 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.838804 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.839128 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.839129 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.839091 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.839549 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.839601 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.839955 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.839984 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.841013 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.841107 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.841336 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.841446 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.841663 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.842899 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.843206 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.843526 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.843557 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.843673 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.844018 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.844251 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.844370 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.844495 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.844573 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.844641 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845144 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845687 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.841040 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845758 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845796 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName:
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845823 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845911 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845938 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845959 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845981 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.845974 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846011 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846023 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846064 5012 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846115 5012 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846153 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846182 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846259 5012 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846281 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846326 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846350 5012 
reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846370 5012 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846402 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846415 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846414 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846781 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846962 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.847009 5012 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.847056 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:25.347041904 +0000 UTC m=+21.380364473 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.847256 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.847440 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.847599 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.848736 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.849507 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.849708 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.850222 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.850733 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.850776 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.850906 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.851564 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.851765 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.851799 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.851836 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.851904 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.852041 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.852480 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.852652 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.853088 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.853139 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.853107 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.853616 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.853657 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.853887 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.854085 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.854747 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.854771 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.855181 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.855589 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.855997 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.856030 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.855932 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.856623 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.859578 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.859628 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.859805 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.859985 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.860401 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.860489 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.860351 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.860547 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.860638 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.860883 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.860893 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.861278 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.862534 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.865058 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.866993 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.865180 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.865241 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.865320 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.865691 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.865770 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.866008 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.866723 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.846421 5012 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867267 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867288 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867350 5012 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867403 5012 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867426 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867445 5012 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867461 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867481 5012 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867497 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867514 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867529 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867549 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867563 5012 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867577 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867591 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867609 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867623 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867638 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867657 5012 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867670 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867683 5012 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867697 5012 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" 
DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867715 5012 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867730 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867743 5012 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867757 5012 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867774 5012 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867786 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.867799 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.868619 5012 swap_util.go:74] 
"error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.868891 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.869448 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.869591 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.869828 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.869939 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.869996 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.870215 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.870035 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.875013 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.870745 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.871028 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.871126 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.871388 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.871435 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.871878 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.871996 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.872445 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.872537 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.872530 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.872950 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.873035 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.873360 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.873720 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.873717 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.873598 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.874012 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.875184 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.874346 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.874421 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.874705 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.874826 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.874860 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.874898 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.875169 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.875447 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.875654 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.875489 5012 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.877169 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.877445 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.877464 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.877655 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.877688 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.877703 5012 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.877756 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.877820 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:25.377777269 +0000 UTC m=+21.411099838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.877929 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.877942 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.877951 5012 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.878012 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:25.378004665 +0000 UTC m=+21.411327234 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.877953 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.875783 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.875920 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.876179 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.878075 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.876388 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.876408 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.878199 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.878454 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.878902 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.878945 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.879398 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.875786 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.879687 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.883540 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:25.383514512 +0000 UTC m=+21.416837081 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.891363 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.900106 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.901358 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.901968 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.902673 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.907351 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.910583 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.916714 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.916922 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.932788 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.936838 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de"}
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.936799 5012 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de" exitCode=255
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.937162 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.948709 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.960148 5012 scope.go:117] "RemoveContainer" containerID="093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de"
Feb 19 05:25:24 crc kubenswrapper[5012]: E0219 05:25:24.963450 5012 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.963579 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.968648 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.968928 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.968881 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.968994 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969034 5012 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969043 5012 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969053 5012 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969061 5012 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969070 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969079 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969087 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969096 5012 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969105 5012 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969114 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969124 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969133 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969141 5012 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969149 5012 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969177 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969186 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969195 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969204 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969212 5012 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969221 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969230 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969238 5012 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969246 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969255 5012 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969264 5012 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969273 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969283 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969292 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969317 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969326 5012 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969335 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969343 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969350 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969358 5012 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969367 5012 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969374 5012 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969383 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969457 5012 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969466 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969476 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969503 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969524 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969532 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969540 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969549 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969557 5012 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969565 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969574 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969582 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969591 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969598 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969606 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969614 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969622 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName:
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969630 5012 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969638 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969646 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969653 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969662 5012 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969669 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969677 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") 
on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969685 5012 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969693 5012 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969700 5012 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969708 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969716 5012 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969736 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969743 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 
05:25:24.969751 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969759 5012 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969768 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969775 5012 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969783 5012 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969791 5012 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969798 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969806 5012 
reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969815 5012 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969836 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969844 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969851 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969858 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969866 5012 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969874 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969882 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969889 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969896 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969904 5012 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969912 5012 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969920 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969927 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc 
kubenswrapper[5012]: I0219 05:25:24.969935 5012 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969944 5012 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969951 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969958 5012 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969967 5012 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969975 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969984 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969991 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.969999 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970007 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970015 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970025 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970034 5012 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970041 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970049 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 
19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970056 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970064 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970072 5012 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970080 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970088 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970098 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970106 5012 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc 
kubenswrapper[5012]: I0219 05:25:24.970114 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970122 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970130 5012 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970138 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970146 5012 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970154 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970162 5012 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970170 5012 reconciler_common.go:293] "Volume detached 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970178 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970186 5012 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970194 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970202 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970210 5012 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970218 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970225 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970235 5012 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970244 5012 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970253 5012 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970261 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970223 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.970269 5012 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971439 5012 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971750 5012 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971786 5012 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971796 5012 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971807 5012 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971818 5012 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971835 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971844 5012 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" 
DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971852 5012 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971860 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971869 5012 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971880 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971890 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971901 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971910 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971918 5012 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971927 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971935 5012 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971944 5012 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971952 5012 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971961 5012 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971970 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.971980 5012 reconciler_common.go:293] "Volume detached for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.973792 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.974994 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.978183 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.984815 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-
operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.986504 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 05:25:24 crc kubenswrapper[5012]: I0219 05:25:24.998689 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:25 crc kubenswrapper[5012]: W0219 05:25:25.010331 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-be8d521325336e805f14cb4d9074867f74fa950ad1d1d87190fd2f52ed2feb0a WatchSource:0}: Error finding container be8d521325336e805f14cb4d9074867f74fa950ad1d1d87190fd2f52ed2feb0a: Status 404 returned error can't find the container with id be8d521325336e805f14cb4d9074867f74fa950ad1d1d87190fd2f52ed2feb0a Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.011414 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.024884 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.040995 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.051572 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.061957 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.063615 5012 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-19 05:20:24 +0000 UTC, rotation deadline is 2026-12-24 06:40:52.68120113 +0000 UTC Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.063674 5012 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7393h15m27.617530183s for next certificate rotation Feb 19 05:25:25 crc 
kubenswrapper[5012]: I0219 05:25:25.072852 5012 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.072881 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.074661 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.090947 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.177933 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.257582 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.375390 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.375571 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:25:26.375542721 +0000 UTC m=+22.408865290 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.376012 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.376149 5012 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.376207 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:26.376199387 +0000 UTC m=+22.409521956 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.476734 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.476779 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.476829 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.476944 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.476946 5012 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.477038 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:26.477017637 +0000 UTC m=+22.510340226 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.476961 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.477073 5012 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.477066 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.477110 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.477128 5012 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.477113 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:26.477103229 +0000 UTC m=+22.510425798 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 05:25:25 crc kubenswrapper[5012]: E0219 05:25:25.477276 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:26.477218982 +0000 UTC m=+22.510541551 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.645891 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 23:06:42.386100089 +0000 UTC
Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.776247 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-4cs9h"]
Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.776764 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4cs9h"
Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.782234 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.782297 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.782493 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.800368 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-5lt44"]
Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.800911 5012 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.802395 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.804915 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.804986 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.804915 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.805259 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.805201 5012 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.820027 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.838668 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.870470 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.882619 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c6kt\" (UniqueName: \"kubernetes.io/projected/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-kube-api-access-5c6kt\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.882666 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2sbf\" (UniqueName: \"kubernetes.io/projected/93b25601-4740-4c9d-9e62-0e7566484633-kube-api-access-r2sbf\") pod \"node-resolver-4cs9h\" (UID: \"93b25601-4740-4c9d-9e62-0e7566484633\") " pod="openshift-dns/node-resolver-4cs9h" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.882703 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/93b25601-4740-4c9d-9e62-0e7566484633-hosts-file\") pod \"node-resolver-4cs9h\" (UID: \"93b25601-4740-4c9d-9e62-0e7566484633\") " pod="openshift-dns/node-resolver-4cs9h" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.882723 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-proxy-tls\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.882812 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-mcd-auth-proxy-config\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.882832 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-rootfs\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc 
kubenswrapper[5012]: I0219 05:25:25.890385 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.910912 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.929349 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.945855 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"be8d521325336e805f14cb4d9074867f74fa950ad1d1d87190fd2f52ed2feb0a"} Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.947583 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350"} Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.947607 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5"} Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.947616 5012 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"524e62f35d3a952d5a77bf2eb5da277810458a7ad787eebba44e43e3f108b4bd"} Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.949499 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.951329 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0"} Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.952050 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.953596 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d"} Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.953624 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"126647a0fe7aa63aac5402eeaf92a9fcb8fd378d8fb027865898010033040917"} Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.957437 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.972123 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.983285 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/93b25601-4740-4c9d-9e62-0e7566484633-hosts-file\") pod \"node-resolver-4cs9h\" (UID: \"93b25601-4740-4c9d-9e62-0e7566484633\") " pod="openshift-dns/node-resolver-4cs9h" Feb 
19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.983353 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-proxy-tls\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.983385 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-mcd-auth-proxy-config\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.983413 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-rootfs\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.983443 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c6kt\" (UniqueName: \"kubernetes.io/projected/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-kube-api-access-5c6kt\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.983386 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/93b25601-4740-4c9d-9e62-0e7566484633-hosts-file\") pod \"node-resolver-4cs9h\" (UID: \"93b25601-4740-4c9d-9e62-0e7566484633\") " pod="openshift-dns/node-resolver-4cs9h" 
Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.983465 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2sbf\" (UniqueName: \"kubernetes.io/projected/93b25601-4740-4c9d-9e62-0e7566484633-kube-api-access-r2sbf\") pod \"node-resolver-4cs9h\" (UID: \"93b25601-4740-4c9d-9e62-0e7566484633\") " pod="openshift-dns/node-resolver-4cs9h" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.983627 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-rootfs\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.984005 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-mcd-auth-proxy-config\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.987219 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:25 crc kubenswrapper[5012]: I0219 05:25:25.990733 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-proxy-tls\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.023069 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.029033 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2sbf\" (UniqueName: \"kubernetes.io/projected/93b25601-4740-4c9d-9e62-0e7566484633-kube-api-access-r2sbf\") pod \"node-resolver-4cs9h\" (UID: \"93b25601-4740-4c9d-9e62-0e7566484633\") " pod="openshift-dns/node-resolver-4cs9h" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.039295 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c6kt\" (UniqueName: \"kubernetes.io/projected/f72c12f8-ba8a-4e43-aba7-f3c31a59181a-kube-api-access-5c6kt\") pod \"machine-config-daemon-5lt44\" (UID: \"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\") " pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.047164 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.067807 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.091224 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.093244 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4cs9h" Feb 19 05:25:26 crc kubenswrapper[5012]: W0219 05:25:26.105007 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93b25601_4740_4c9d_9e62_0e7566484633.slice/crio-e84114dbdef9b1d4228b5db4ca491ad5449687eaf133c1e4370be544b7ff2c61 WatchSource:0}: Error finding container e84114dbdef9b1d4228b5db4ca491ad5449687eaf133c1e4370be544b7ff2c61: Status 404 returned error can't find the container with id e84114dbdef9b1d4228b5db4ca491ad5449687eaf133c1e4370be544b7ff2c61 Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.117719 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.121249 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: W0219 05:25:26.128532 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf72c12f8_ba8a_4e43_aba7_f3c31a59181a.slice/crio-97b1b7f1472e272dc53918364c6c295394ca63cce7cb4adab3c71102c1375740 WatchSource:0}: Error finding container 97b1b7f1472e272dc53918364c6c295394ca63cce7cb4adab3c71102c1375740: Status 404 returned error can't find the container with id 97b1b7f1472e272dc53918364c6c295394ca63cce7cb4adab3c71102c1375740 Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.149152 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.171119 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.191349 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.209624 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.388123 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.388389 5012 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:25:28.388341044 +0000 UTC m=+24.421663803 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.388884 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.389048 5012 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.389115 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:28.389106333 +0000 UTC m=+24.422428902 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.489825 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.489902 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.489935 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490064 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490097 5012 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490163 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490175 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490183 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:28.490160259 +0000 UTC m=+24.523482828 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490192 5012 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490263 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:28.490240501 +0000 UTC m=+24.523563070 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490106 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490284 5012 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.490323 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:28.490316683 +0000 UTC m=+24.523639252 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.646249 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 03:52:54.915871811 +0000 UTC Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.700910 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-lkrsg"] Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.701239 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.701824 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.701898 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.702173 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.702206 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.702227 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:26 crc kubenswrapper[5012]: E0219 05:25:26.702427 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.704541 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.704865 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.705059 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.705178 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.706569 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.717265 5012 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.718113 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.719237 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.719881 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.720868 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.721402 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.721970 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.722863 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.722955 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.723947 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.724865 5012 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.725390 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.726488 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.727036 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.727548 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.728436 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.728919 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.729894 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.730373 5012 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.730901 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.731859 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.732330 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.733225 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.733650 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.734667 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.735103 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.735693 5012 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.736704 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.737153 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.738230 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.738716 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.739553 5012 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.739654 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.741372 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: 
I0219 05:25:26.742252 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.742709 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.743282 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.744131 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.744920 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.745793 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.746446 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.747429 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.747899 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.748836 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.749464 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.750416 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.750889 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.751744 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.752249 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.753352 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.753846 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.754726 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.755159 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.756047 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.756599 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.757030 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.757914 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-wv2tq"] 
Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.758621 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.760901 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.762101 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.763816 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793169 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7a04e36-fbaa-4de1-871a-7225433eebb0-cni-binary-copy\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793210 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-daemon-config\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793234 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-system-cni-dir\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793267 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-var-lib-cni-multus\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793286 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-hostroot\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793322 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-cnibin\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793341 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-run-k8s-cni-cncf-io\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793404 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-etc-kubernetes\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793452 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwlt7\" (UniqueName: \"kubernetes.io/projected/e7a04e36-fbaa-4de1-871a-7225433eebb0-kube-api-access-nwlt7\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793498 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-run-netns\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793518 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-var-lib-kubelet\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793535 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-run-multus-certs\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793569 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-os-release\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793590 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-socket-dir-parent\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793605 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-conf-dir\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793627 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-cni-dir\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.793645 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-var-lib-cni-bin\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.795581 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.817293 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.834069 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.852538 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.871963 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.894897 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-var-lib-cni-bin\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 
05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.894959 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6f3af476-577a-46f9-a71c-60fab8fdaa68-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.894996 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7a04e36-fbaa-4de1-871a-7225433eebb0-cni-binary-copy\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895038 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-daemon-config\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895069 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-system-cni-dir\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895114 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-var-lib-cni-multus\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895125 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-var-lib-cni-bin\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895251 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-var-lib-cni-multus\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895158 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-cnibin\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895280 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-cnibin\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895411 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-run-k8s-cni-cncf-io\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895417 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-system-cni-dir\") pod 
\"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895484 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-etc-kubernetes\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895439 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-etc-kubernetes\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895513 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-run-k8s-cni-cncf-io\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895553 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6f3af476-577a-46f9-a71c-60fab8fdaa68-cni-binary-copy\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895605 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-run-netns\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 
19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895643 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895677 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94dgd\" (UniqueName: \"kubernetes.io/projected/6f3af476-577a-46f9-a71c-60fab8fdaa68-kube-api-access-94dgd\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895715 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-run-netns\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895720 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-socket-dir-parent\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895799 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-cni-dir\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc 
kubenswrapper[5012]: I0219 05:25:26.895827 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-daemon-config\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895857 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-cnibin\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895895 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-hostroot\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.895973 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-system-cni-dir\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896009 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-os-release\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc 
kubenswrapper[5012]: I0219 05:25:26.896020 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-socket-dir-parent\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896045 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwlt7\" (UniqueName: \"kubernetes.io/projected/e7a04e36-fbaa-4de1-871a-7225433eebb0-kube-api-access-nwlt7\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896023 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-hostroot\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896109 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-var-lib-kubelet\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896139 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-cni-dir\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896151 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-run-multus-certs\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896178 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-os-release\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896193 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-var-lib-kubelet\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896211 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-conf-dir\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896265 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-host-run-multus-certs\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896238 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-multus-conf-dir\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " 
pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896355 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7a04e36-fbaa-4de1-871a-7225433eebb0-cni-binary-copy\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.896545 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7a04e36-fbaa-4de1-871a-7225433eebb0-os-release\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.900190 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.915596 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.932189 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.933129 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwlt7\" (UniqueName: \"kubernetes.io/projected/e7a04e36-fbaa-4de1-871a-7225433eebb0-kube-api-access-nwlt7\") pod \"multus-lkrsg\" (UID: \"e7a04e36-fbaa-4de1-871a-7225433eebb0\") " pod="openshift-multus/multus-lkrsg" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.955091 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.967164 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2"} Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.967234 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049"} Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.967256 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"97b1b7f1472e272dc53918364c6c295394ca63cce7cb4adab3c71102c1375740"} Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 
05:25:26.970277 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4cs9h" event={"ID":"93b25601-4740-4c9d-9e62-0e7566484633","Type":"ContainerStarted","Data":"2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe"} Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.970380 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4cs9h" event={"ID":"93b25601-4740-4c9d-9e62-0e7566484633","Type":"ContainerStarted","Data":"e84114dbdef9b1d4228b5db4ca491ad5449687eaf133c1e4370be544b7ff2c61"} Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.983160 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997090 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-cnibin\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997163 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-system-cni-dir\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997201 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-os-release\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997261 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6f3af476-577a-46f9-a71c-60fab8fdaa68-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997269 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-system-cni-dir\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997269 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-cnibin\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997385 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-os-release\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997390 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6f3af476-577a-46f9-a71c-60fab8fdaa68-cni-binary-copy\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997524 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.997561 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94dgd\" (UniqueName: \"kubernetes.io/projected/6f3af476-577a-46f9-a71c-60fab8fdaa68-kube-api-access-94dgd\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.998087 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:26Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.998449 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6f3af476-577a-46f9-a71c-60fab8fdaa68-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.998665 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6f3af476-577a-46f9-a71c-60fab8fdaa68-cni-binary-copy\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:26 crc kubenswrapper[5012]: I0219 05:25:26.999470 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6f3af476-577a-46f9-a71c-60fab8fdaa68-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.013059 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc 
kubenswrapper[5012]: I0219 05:25:27.016976 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94dgd\" (UniqueName: \"kubernetes.io/projected/6f3af476-577a-46f9-a71c-60fab8fdaa68-kube-api-access-94dgd\") pod \"multus-additional-cni-plugins-wv2tq\" (UID: \"6f3af476-577a-46f9-a71c-60fab8fdaa68\") " pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.018844 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lkrsg" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.027732 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: W0219 05:25:27.041689 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7a04e36_fbaa_4de1_871a_7225433eebb0.slice/crio-8b4133672645efb3ebea44ed010f4748c5a8fb90bf6471cf32a9fe9215d736b0 WatchSource:0}: Error finding container 8b4133672645efb3ebea44ed010f4748c5a8fb90bf6471cf32a9fe9215d736b0: Status 404 returned error can't find the container with id 8b4133672645efb3ebea44ed010f4748c5a8fb90bf6471cf32a9fe9215d736b0 Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.048734 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.078485 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.110403 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.113200 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8ff9w"] Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.113967 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.118680 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.118819 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.119029 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.119119 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.122813 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.122957 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.123107 5012 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.142577 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.160809 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.193267 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.202189 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovn-node-metrics-cert\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.202238 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-node-log\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.202260 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-slash\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203135 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-netns\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203187 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-ovn\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203213 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-config\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203254 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-var-lib-openvswitch\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203292 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-openvswitch\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203351 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj2rz\" (UniqueName: \"kubernetes.io/projected/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-kube-api-access-sj2rz\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203394 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-systemd\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203423 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-netd\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203457 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-ovn-kubernetes\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203489 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-env-overrides\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203509 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-script-lib\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203539 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-bin\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203566 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-kubelet\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203600 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-etc-openvswitch\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203622 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-systemd-units\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203656 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-log-socket\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.203680 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.207770 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.230851 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.242837 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.257465 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.274360 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.284960 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.299886 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.304929 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-systemd-units\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.304993 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-log-socket\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305026 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305041 
5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-systemd-units\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305054 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovn-node-metrics-cert\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305073 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-node-log\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305091 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305094 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-slash\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305130 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-ovn\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305135 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-slash\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305150 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-config\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305177 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-node-log\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305184 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-netns\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305206 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-ovn\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305210 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-openvswitch\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305232 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-var-lib-openvswitch\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305251 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj2rz\" (UniqueName: \"kubernetes.io/projected/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-kube-api-access-sj2rz\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305269 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-systemd\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305310 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-netd\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305283 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-log-socket\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305328 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-script-lib\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305469 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-ovn-kubernetes\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305499 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-env-overrides\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305551 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-systemd\") pod 
\"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305591 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-bin\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305557 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-bin\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305648 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-kubelet\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305675 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-etc-openvswitch\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305682 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-netns\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305718 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-openvswitch\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305749 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-etc-openvswitch\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305767 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-kubelet\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305702 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-netd\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.305628 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-ovn-kubernetes\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 
05:25:27.305668 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-var-lib-openvswitch\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.307220 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-config\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.307526 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-script-lib\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.307558 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-env-overrides\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.315978 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.321436 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovn-node-metrics-cert\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.324229 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj2rz\" (UniqueName: \"kubernetes.io/projected/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-kube-api-access-sj2rz\") pod \"ovnkube-node-8ff9w\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.335346 5012 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.352490 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.381224 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.397759 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.409502 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.422986 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.438406 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.445934 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.646682 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 11:49:27.580680535 +0000 UTC Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.975528 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691"} Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.977415 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c" exitCode=0 Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.977487 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c"} Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.977519 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"6412d35e0c37d9d105ee4ca82031f54078f7add4cd5d9abd98a4a8c14bd96adb"} Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.982013 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" 
event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerStarted","Data":"904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7"} Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.982051 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerStarted","Data":"3efd5bcfe001b500cc1296a01d9d80ac1878ab69cd10be70e5906643b9c996bc"} Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.985470 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lkrsg" event={"ID":"e7a04e36-fbaa-4de1-871a-7225433eebb0","Type":"ContainerStarted","Data":"10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061"} Feb 19 05:25:27 crc kubenswrapper[5012]: I0219 05:25:27.985538 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lkrsg" event={"ID":"e7a04e36-fbaa-4de1-871a-7225433eebb0","Type":"ContainerStarted","Data":"8b4133672645efb3ebea44ed010f4748c5a8fb90bf6471cf32a9fe9215d736b0"} Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.005117 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:27Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.021728 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.041025 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.060587 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.072447 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.089389 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.100520 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.113011 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.129955 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.146226 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.165625 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.182781 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.197945 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.215601 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.233738 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.258651 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.283241 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.298698 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.321589 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.335731 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.348711 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.382543 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.407702 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.424898 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.425074 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.425209 5012 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 
05:25:28.425268 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:32.425251763 +0000 UTC m=+28.458574332 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.425362 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:25:32.425352296 +0000 UTC m=+28.458674865 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.451970 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.467742 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.479924 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:28Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.525577 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.525635 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:28 crc 
kubenswrapper[5012]: I0219 05:25:28.525659 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.525788 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.525808 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.525819 5012 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.525862 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:32.525848988 +0000 UTC m=+28.559171557 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.525922 5012 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.526049 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.526093 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:32.526066873 +0000 UTC m=+28.559389442 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.526103 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.526123 5012 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.526205 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:32.526181446 +0000 UTC m=+28.559504015 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.648227 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 18:50:33.842189684 +0000 UTC Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.702129 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.702163 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.702342 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.702425 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.702677 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:28 crc kubenswrapper[5012]: E0219 05:25:28.702578 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.993102 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.993232 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.993269 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" 
event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.993297 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.993369 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.993395 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.995115 5012 generic.go:334] "Generic (PLEG): container finished" podID="6f3af476-577a-46f9-a71c-60fab8fdaa68" containerID="904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7" exitCode=0 Feb 19 05:25:28 crc kubenswrapper[5012]: I0219 05:25:28.995237 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerDied","Data":"904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7"} Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.012197 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.027162 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.049087 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.073391 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.089181 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.103823 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.116598 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.133079 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.147905 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.163138 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.182284 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.197664 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.223449 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:29Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:29 crc kubenswrapper[5012]: I0219 05:25:29.649357 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 05:50:02.458686832 +0000 UTC Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.001390 5012 generic.go:334] "Generic (PLEG): container finished" podID="6f3af476-577a-46f9-a71c-60fab8fdaa68" containerID="5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9" exitCode=0 Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.001431 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerDied","Data":"5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9"} Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.027567 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.046715 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.066691 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.085914 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.105869 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.123013 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.140499 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.163155 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.178811 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.195859 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.212451 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.232424 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.251213 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.422606 5012 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.426433 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.426488 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc 
kubenswrapper[5012]: I0219 05:25:30.426504 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.426689 5012 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.437468 5012 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.437930 5012 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.439197 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.439240 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.439255 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.439274 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.439290 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:30 crc kubenswrapper[5012]: E0219 05:25:30.459269 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.464138 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.464177 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.464190 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.464203 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.464216 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:30 crc kubenswrapper[5012]: E0219 05:25:30.482964 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.488480 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.488539 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.488557 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.488582 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.488597 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:30 crc kubenswrapper[5012]: E0219 05:25:30.507502 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.516999 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.517064 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.517081 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.517108 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.517131 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:30 crc kubenswrapper[5012]: E0219 05:25:30.539446 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.545080 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.545144 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.545163 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.545195 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.545217 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:30 crc kubenswrapper[5012]: E0219 05:25:30.562793 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:30Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:30 crc kubenswrapper[5012]: E0219 05:25:30.563126 5012 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.565576 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.565722 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.565831 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.565926 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.566023 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.650940 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:08:07.524514588 +0000 UTC Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.669955 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.670012 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.670047 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.670073 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.670093 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.702793 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.702865 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:30 crc kubenswrapper[5012]: E0219 05:25:30.702953 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:30 crc kubenswrapper[5012]: E0219 05:25:30.703088 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.702793 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:30 crc kubenswrapper[5012]: E0219 05:25:30.703273 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.773836 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.773894 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.773941 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.773972 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.773996 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.877637 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.877671 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.877682 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.877714 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.877729 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.981994 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.982063 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.982102 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.982134 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:30 crc kubenswrapper[5012]: I0219 05:25:30.982158 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:30Z","lastTransitionTime":"2026-02-19T05:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.010563 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.014415 5012 generic.go:334] "Generic (PLEG): container finished" podID="6f3af476-577a-46f9-a71c-60fab8fdaa68" containerID="fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e" exitCode=0 Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.014484 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerDied","Data":"fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.081382 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.084427 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.084458 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.084471 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.084492 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.084505 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:31Z","lastTransitionTime":"2026-02-19T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.098208 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.120141 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.135720 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.148722 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 
05:25:31.161836 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.174181 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.186888 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.186921 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.186930 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.186948 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.186959 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:31Z","lastTransitionTime":"2026-02-19T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.194412 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.205864 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.215431 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.227161 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.237216 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.251846 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:31Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.292277 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:31 crc 
kubenswrapper[5012]: I0219 05:25:31.292386 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.292456 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.292489 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.292509 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:31Z","lastTransitionTime":"2026-02-19T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.397771 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.397846 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.397867 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.397894 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.397914 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:31Z","lastTransitionTime":"2026-02-19T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.501140 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.501205 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.501222 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.501260 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.501280 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:31Z","lastTransitionTime":"2026-02-19T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.520195 5012 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.604752 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.604815 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.604833 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.604861 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.604881 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:31Z","lastTransitionTime":"2026-02-19T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.651510 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 22:16:22.346602674 +0000 UTC Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.709784 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.709841 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.709860 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.709886 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.709905 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:31Z","lastTransitionTime":"2026-02-19T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.812790 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.812852 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.812869 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.812898 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.812919 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:31Z","lastTransitionTime":"2026-02-19T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.915549 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.915626 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.915653 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.915684 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:31 crc kubenswrapper[5012]: I0219 05:25:31.915710 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:31Z","lastTransitionTime":"2026-02-19T05:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.019158 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.019218 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.019238 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.019266 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.019286 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.023203 5012 generic.go:334] "Generic (PLEG): container finished" podID="6f3af476-577a-46f9-a71c-60fab8fdaa68" containerID="907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827" exitCode=0 Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.023264 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerDied","Data":"907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.047085 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.067553 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.085858 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.106254 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.123013 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.123862 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.123902 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.123917 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.123937 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.123950 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.145202 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.172773 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486
261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.199197 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.219074 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.227019 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.227049 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.227060 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.227077 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.227090 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.250152 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.269374 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.284412 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.299884 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.330783 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.330833 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.330849 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.330875 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.330922 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.433975 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.434022 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.434032 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.434049 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.434063 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.472509 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.472648 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.472765 5012 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.472814 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:25:40.472754246 +0000 UTC m=+36.506076865 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.472875 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:40.472854829 +0000 UTC m=+36.506177438 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.537824 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.537916 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.537942 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.537975 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.537999 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.573877 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.573997 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.574056 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574184 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574261 
5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574266 5012 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574280 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574359 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574382 5012 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574404 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:40.574380786 +0000 UTC m=+36.607703395 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574458 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:40.574433358 +0000 UTC m=+36.607755957 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574291 5012 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.574602 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:40.574565661 +0000 UTC m=+36.607888270 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.641937 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.642097 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.642129 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.642205 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.642229 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.651633 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:41:31.000674437 +0000 UTC Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.702519 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.702574 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.702758 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.702825 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.703035 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:32 crc kubenswrapper[5012]: E0219 05:25:32.703363 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.746273 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.746371 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.746393 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.746419 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.746439 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.849497 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.849586 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.849611 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.849650 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.849671 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.952829 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.952920 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.952939 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.952968 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:32 crc kubenswrapper[5012]: I0219 05:25:32.952987 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:32Z","lastTransitionTime":"2026-02-19T05:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.029026 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerStarted","Data":"b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.042626 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c
2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.056963 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.057010 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.057022 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.057045 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.057058 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.058529 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.087273 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn
-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\
\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.108382 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.124842 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.144627 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.161443 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.161511 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.161530 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.161560 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.161583 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.164631 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.185597 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.204928 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.225934 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94d
gd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.248872 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.264923 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.265294 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.265333 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.265343 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:33 crc 
kubenswrapper[5012]: I0219 05:25:33.265360 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.265371 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.284960 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:33Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 
05:25:33.370212 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.370266 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.370281 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.370322 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.370339 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.473616 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.474090 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.474114 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.474151 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.474170 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.577001 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.577051 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.577063 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.577080 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.577093 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.652222 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:18:58.169749953 +0000 UTC Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.680499 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.680551 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.680570 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.680597 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.680615 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.784759 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.784812 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.784826 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.784851 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.784866 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.887718 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.887765 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.887779 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.887797 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.887811 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.991623 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.991689 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.991708 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.991736 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:33 crc kubenswrapper[5012]: I0219 05:25:33.991757 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:33Z","lastTransitionTime":"2026-02-19T05:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.038528 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.039753 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.039807 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.039823 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.051660 5012 generic.go:334] "Generic (PLEG): container finished" podID="6f3af476-577a-46f9-a71c-60fab8fdaa68" containerID="b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68" exitCode=0 Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.051721 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerDied","Data":"b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.065225 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.081980 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.082874 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.082868 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.095644 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.095690 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.095701 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.095721 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.095735 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:34Z","lastTransitionTime":"2026-02-19T05:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.107502 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.121503 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.135661 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.152645 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.169510 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.185187 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.199010 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:34 crc 
kubenswrapper[5012]: I0219 05:25:34.199068 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.199079 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.199097 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.199166 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:34Z","lastTransitionTime":"2026-02-19T05:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.200580 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.216747 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.231107 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.251617 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.264571 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.278384 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.296444 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.301727 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.301784 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.301802 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:34 crc 
kubenswrapper[5012]: I0219 05:25:34.301828 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.301847 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:34Z","lastTransitionTime":"2026-02-19T05:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.313653 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.335397 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.347215 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.360198 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.373671 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.394145 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.405395 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.405438 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.405451 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.405474 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.405492 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:34Z","lastTransitionTime":"2026-02-19T05:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.408203 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df82
5fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.421884 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.440385 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.457604 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.469930 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.509389 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.509445 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.509466 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.509492 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.509514 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:34Z","lastTransitionTime":"2026-02-19T05:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.612611 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.612858 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.612919 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.613011 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.613102 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:34Z","lastTransitionTime":"2026-02-19T05:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.653233 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 10:10:50.538376142 +0000 UTC Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.693935 5012 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.702383 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.702407 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:34 crc kubenswrapper[5012]: E0219 05:25:34.702887 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.703041 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:34 crc kubenswrapper[5012]: E0219 05:25:34.702902 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:34 crc kubenswrapper[5012]: E0219 05:25:34.703229 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.716001 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.716382 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.716546 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.716733 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.716932 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:34Z","lastTransitionTime":"2026-02-19T05:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.723180 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.744643 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.763974 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.788121 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.813277 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.819555 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.819615 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.819634 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.819659 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.819678 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:34Z","lastTransitionTime":"2026-02-19T05:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.834761 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.868190 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.889062 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.904732 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.918952 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.923148 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.923205 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.923220 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.923242 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.923258 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:34Z","lastTransitionTime":"2026-02-19T05:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.939975 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.956298 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:34 crc kubenswrapper[5012]: I0219 05:25:34.975034 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:34Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.026281 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc 
kubenswrapper[5012]: I0219 05:25:35.026351 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.026368 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.026386 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.026401 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.062964 5012 generic.go:334] "Generic (PLEG): container finished" podID="6f3af476-577a-46f9-a71c-60fab8fdaa68" containerID="0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a" exitCode=0 Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.063033 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerDied","Data":"0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.082276 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.105359 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.123647 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.129420 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.129486 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.129509 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.129538 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.129559 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.144813 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.162391 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.176677 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.193962 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.211088 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.232508 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.232557 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.232576 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.232600 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.232617 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.238734 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.259706 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.276398 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.293578 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.316053 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:35Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.336956 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.337028 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.337047 5012 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.337078 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.337097 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.439666 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.439700 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.439728 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.439743 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.439754 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.542653 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.542682 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.542690 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.542703 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.542712 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.645328 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.645363 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.645371 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.645393 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.645402 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.653771 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:35:44.8451045 +0000 UTC Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.748570 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.748600 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.748611 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.748624 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.748634 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.854607 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.854675 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.854693 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.854719 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.854738 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.958672 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.958726 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.958739 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.958760 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:35 crc kubenswrapper[5012]: I0219 05:25:35.958774 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:35Z","lastTransitionTime":"2026-02-19T05:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.061895 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.061956 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.061976 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.062003 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.062018 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.073619 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" event={"ID":"6f3af476-577a-46f9-a71c-60fab8fdaa68","Type":"ContainerStarted","Data":"0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.095573 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.110664 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.131375 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.148227 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.163935 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.165619 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc 
kubenswrapper[5012]: I0219 05:25:36.165672 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.165684 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.165710 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.165727 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.187009 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.205271 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.242116 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.267026 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.269182 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.269213 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.269229 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.269253 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.269271 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.289661 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.320445 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.353967 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.371618 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.371660 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.371670 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.371687 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.371699 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.388101 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:36Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.474718 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.474762 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.474772 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.474789 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.474799 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.579076 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.579140 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.579159 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.579184 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.579202 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.654148 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 00:43:16.147267767 +0000 UTC Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.683013 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.683070 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.683089 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.683117 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.683138 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.686545 5012 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.702750 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.702785 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:36 crc kubenswrapper[5012]: E0219 05:25:36.702909 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.702971 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:36 crc kubenswrapper[5012]: E0219 05:25:36.703128 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:36 crc kubenswrapper[5012]: E0219 05:25:36.703250 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.786438 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.786491 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.786511 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.786537 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.786557 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.890327 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.890388 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.890411 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.890464 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.890485 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.992929 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.992974 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.992995 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.993019 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:36 crc kubenswrapper[5012]: I0219 05:25:36.993038 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:36Z","lastTransitionTime":"2026-02-19T05:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.079709 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/0.log" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.083900 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b" exitCode=1 Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.083962 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.085135 5012 scope.go:117] "RemoveContainer" containerID="7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.096377 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.096430 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.096447 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.096477 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.096496 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:37Z","lastTransitionTime":"2026-02-19T05:25:37Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.111089 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.136112 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.157327 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.182163 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.201025 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.201083 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.201105 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.201134 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.201152 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:37Z","lastTransitionTime":"2026-02-19T05:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.206502 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.239449 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"message\\\":\\\"handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:36.450683 6353 factory.go:656] Stopping watch factory\\\\nI0219 05:25:36.450684 6353 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 
05:25:36.450691 6353 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:36.450703 6353 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:36.450745 6353 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.450824 6353 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451051 6353 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451689 6353 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:36.451824 6353 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451966 6353 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.452179 6353 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002
d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.257108 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.270442 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.288053 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.309663 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.309751 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.309766 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.309803 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.309814 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:37Z","lastTransitionTime":"2026-02-19T05:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.317405 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.333904 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.350048 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.366537 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:37Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.413246 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.413373 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.413392 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.413422 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.413445 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:37Z","lastTransitionTime":"2026-02-19T05:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.516719 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.516766 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.516780 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.516799 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.516811 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:37Z","lastTransitionTime":"2026-02-19T05:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.620136 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.620205 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.620223 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.620646 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.620702 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:37Z","lastTransitionTime":"2026-02-19T05:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.654519 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:51:14.96217822 +0000 UTC Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.724215 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.724259 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.724269 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.724287 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.724311 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:37Z","lastTransitionTime":"2026-02-19T05:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.827622 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.827668 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.827677 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.827692 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.827704 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:37Z","lastTransitionTime":"2026-02-19T05:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.930819 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.930898 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.930922 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.930962 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:37 crc kubenswrapper[5012]: I0219 05:25:37.930982 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:37Z","lastTransitionTime":"2026-02-19T05:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.034256 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.034331 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.034344 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.034399 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.034416 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.091840 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/0.log" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.096045 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.096611 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.118230 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.137025 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.137079 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.137091 5012 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.137115 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.137131 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.137284 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86
c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.159352 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.181859 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.207696 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.232242 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.240820 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.240855 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.240866 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.240885 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.240898 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.255055 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.277169 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.294980 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.317189 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.335014 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.343726 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.343774 5012 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.343793 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.343818 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.343838 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.355719 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.389429 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"message\\\":\\\"handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:36.450683 6353 factory.go:656] Stopping watch factory\\\\nI0219 05:25:36.450684 6353 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:36.450691 6353 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:36.450703 6353 handler.go:208] 
Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:36.450745 6353 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.450824 6353 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451051 6353 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451689 6353 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:36.451824 6353 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451966 6353 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.452179 6353 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:38Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.446453 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.446544 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.446563 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.446617 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.446636 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.550518 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.550559 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.550576 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.550598 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.550616 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.654046 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.654114 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.654133 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.654158 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.654177 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.654827 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 11:54:37.280996348 +0000 UTC Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.702084 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.702120 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:38 crc kubenswrapper[5012]: E0219 05:25:38.702360 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:38 crc kubenswrapper[5012]: E0219 05:25:38.702505 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.702687 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:38 crc kubenswrapper[5012]: E0219 05:25:38.703015 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.757713 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.757948 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.758158 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.758298 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.758504 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.862404 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.862659 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.862802 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.862967 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.863097 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.965562 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.965624 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.965642 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.965671 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:38 crc kubenswrapper[5012]: I0219 05:25:38.965691 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:38Z","lastTransitionTime":"2026-02-19T05:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.069466 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.069539 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.069556 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.069584 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.069605 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:39Z","lastTransitionTime":"2026-02-19T05:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.103911 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/1.log" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.105195 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/0.log" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.111708 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c" exitCode=1 Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.111852 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.112186 5012 scope.go:117] "RemoveContainer" containerID="7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.113696 5012 scope.go:117] "RemoveContainer" containerID="a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c" Feb 19 05:25:39 crc kubenswrapper[5012]: E0219 05:25:39.114192 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.136029 5012 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.161145 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.173850 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.173903 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.173921 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.173947 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.173965 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:39Z","lastTransitionTime":"2026-02-19T05:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.197951 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"message\\\":\\\"handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:36.450683 6353 factory.go:656] Stopping watch factory\\\\nI0219 05:25:36.450684 6353 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:36.450691 6353 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:36.450703 6353 handler.go:208] 
Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:36.450745 6353 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.450824 6353 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451051 6353 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451689 6353 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:36.451824 6353 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451966 6353 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.452179 6353 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 
6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/li
b/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.219858 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.237241 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.257593 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.276034 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.277060 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.277115 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.277133 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:39 crc 
kubenswrapper[5012]: I0219 05:25:39.277161 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.277181 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:39Z","lastTransitionTime":"2026-02-19T05:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.296043 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.316968 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.340362 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d1
50597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.364171 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.379738 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.379792 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.379810 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.379837 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.379859 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:39Z","lastTransitionTime":"2026-02-19T05:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.383023 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.402043 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.483421 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.483485 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.483502 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.483528 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.483548 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:39Z","lastTransitionTime":"2026-02-19T05:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.498851 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6"] Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.499631 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.502877 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.503141 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.526458 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.548223 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.568213 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.586578 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.586817 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.586977 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.587121 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.587255 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:39Z","lastTransitionTime":"2026-02-19T05:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.587941 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.615803 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.635691 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.655458 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:50:04.181439461 +0000 UTC Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.659424 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b08465
2d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.668219 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: 
\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.668283 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.668349 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.668530 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvjcr\" (UniqueName: \"kubernetes.io/projected/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-kube-api-access-wvjcr\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.681738 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.691473 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.691545 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.691566 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.691593 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.691617 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:39Z","lastTransitionTime":"2026-02-19T05:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.720385 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7aac08f0ddde5e37715e32e986221e9f8220e5afd35d75937db0658c55c7f25b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"message\\\":\\\"handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:36.450683 6353 factory.go:656] Stopping watch factory\\\\nI0219 05:25:36.450684 6353 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:36.450691 6353 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:36.450703 6353 handler.go:208] 
Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:36.450745 6353 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.450824 6353 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451051 6353 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451689 6353 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:36.451824 6353 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.451966 6353 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:25:36.452179 6353 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 
6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/li
b/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.740725 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.756152 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.769985 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.770040 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.770077 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.770119 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvjcr\" (UniqueName: 
\"kubernetes.io/projected/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-kube-api-access-wvjcr\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.771361 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.771800 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.776361 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.780107 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gncl6\" (UID: 
\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.795657 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.795723 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.795791 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.795852 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.795875 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:39Z","lastTransitionTime":"2026-02-19T05:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.797608 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.799902 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvjcr\" (UniqueName: \"kubernetes.io/projected/fa645bc5-8cc3-45bc-be2e-7cf7d53abba0-kube-api-access-wvjcr\") pod 
\"ovnkube-control-plane-749d76644c-gncl6\" (UID: \"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.820813 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.821630 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\
" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:39Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:39 crc kubenswrapper[5012]: W0219 05:25:39.841392 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa645bc5_8cc3_45bc_be2e_7cf7d53abba0.slice/crio-19efa70e6545f17d8ca7b859d7d636406c8ffb71b4e80d68399592e11f46a8a3 WatchSource:0}: Error finding container 19efa70e6545f17d8ca7b859d7d636406c8ffb71b4e80d68399592e11f46a8a3: Status 404 returned error can't find the container with id 19efa70e6545f17d8ca7b859d7d636406c8ffb71b4e80d68399592e11f46a8a3 Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.899729 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.900529 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.900556 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.900591 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:39 crc kubenswrapper[5012]: I0219 05:25:39.900643 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:39Z","lastTransitionTime":"2026-02-19T05:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.004724 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.004785 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.004799 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.004823 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.004837 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.107468 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.107528 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.107548 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.107576 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.107596 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.117871 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/1.log" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.130160 5012 scope.go:117] "RemoveContainer" containerID="a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.130456 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.132099 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" event={"ID":"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0","Type":"ContainerStarted","Data":"8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.132192 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" event={"ID":"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0","Type":"ContainerStarted","Data":"19efa70e6545f17d8ca7b859d7d636406c8ffb71b4e80d68399592e11f46a8a3"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.151850 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.166419 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.187792 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.204439 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.210954 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.211006 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.211025 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc 
kubenswrapper[5012]: I0219 05:25:40.211050 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.211068 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.218853 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.241024 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.259648 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.277632 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.299577 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.314291 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.314370 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.314389 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.314416 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.314437 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.322929 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.340388 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.355258 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.378579 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.417349 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.417409 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.417424 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.417445 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.417460 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.427236 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.478936 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.479187 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:25:56.479146955 +0000 UTC m=+52.512469524 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.479346 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.479457 5012 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.479598 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:56.479568685 +0000 UTC m=+52.512891294 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.521191 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.521245 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.521258 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.521288 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.521322 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.580626 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.580721 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.580805 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581013 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581042 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581063 5012 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581175 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:56.581151414 +0000 UTC m=+52.614474023 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581231 5012 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581350 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581387 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581412 5012 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581427 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:56.58138861 +0000 UTC m=+52.614711229 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.581475 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:56.581453682 +0000 UTC m=+52.614776291 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.624335 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.624397 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.624415 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.624441 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.624459 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.636281 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-q5cb2"] Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.637021 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-sh856"] Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.637244 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.637389 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.637517 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.641563 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.641940 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.648374 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.649227 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.655647 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:57:43.120829457 +0000 UTC Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.664919 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc 
kubenswrapper[5012]: I0219 05:25:40.681490 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf7wt\" (UniqueName: \"kubernetes.io/projected/6e445e06-98fd-4fc2-b480-58ddf368aeb6-kube-api-access-gf7wt\") pod \"node-ca-sh856\" (UID: \"6e445e06-98fd-4fc2-b480-58ddf368aeb6\") " pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.681567 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e445e06-98fd-4fc2-b480-58ddf368aeb6-host\") pod \"node-ca-sh856\" (UID: \"6e445e06-98fd-4fc2-b480-58ddf368aeb6\") " pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.681610 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.681724 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6e445e06-98fd-4fc2-b480-58ddf368aeb6-serviceca\") pod \"node-ca-sh856\" (UID: \"6e445e06-98fd-4fc2-b480-58ddf368aeb6\") " pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.681796 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7whbb\" (UniqueName: \"kubernetes.io/projected/2e231950-a365-4a82-9481-05fdac171449-kube-api-access-7whbb\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:40 
crc kubenswrapper[5012]: I0219 05:25:40.683584 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.701257 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.702559 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.702665 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.702755 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.702587 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.703082 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.703772 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.724019 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.728677 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.728744 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.728764 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.728794 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.728819 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.743443 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.760262 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.775497 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.783165 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf7wt\" (UniqueName: \"kubernetes.io/projected/6e445e06-98fd-4fc2-b480-58ddf368aeb6-kube-api-access-gf7wt\") pod \"node-ca-sh856\" (UID: \"6e445e06-98fd-4fc2-b480-58ddf368aeb6\") " pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.783625 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e445e06-98fd-4fc2-b480-58ddf368aeb6-host\") pod \"node-ca-sh856\" (UID: \"6e445e06-98fd-4fc2-b480-58ddf368aeb6\") " pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.783811 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.783991 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6e445e06-98fd-4fc2-b480-58ddf368aeb6-serviceca\") pod \"node-ca-sh856\" (UID: \"6e445e06-98fd-4fc2-b480-58ddf368aeb6\") " pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.784156 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7whbb\" (UniqueName: \"kubernetes.io/projected/2e231950-a365-4a82-9481-05fdac171449-kube-api-access-7whbb\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.784066 5012 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.784612 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs podName:2e231950-a365-4a82-9481-05fdac171449 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:41.284582049 +0000 UTC m=+37.317904648 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs") pod "network-metrics-daemon-q5cb2" (UID: "2e231950-a365-4a82-9481-05fdac171449") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.783819 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e445e06-98fd-4fc2-b480-58ddf368aeb6-host\") pod \"node-ca-sh856\" (UID: \"6e445e06-98fd-4fc2-b480-58ddf368aeb6\") " pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.786022 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6e445e06-98fd-4fc2-b480-58ddf368aeb6-serviceca\") pod \"node-ca-sh856\" (UID: \"6e445e06-98fd-4fc2-b480-58ddf368aeb6\") " pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.793547 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.810874 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7whbb\" (UniqueName: \"kubernetes.io/projected/2e231950-a365-4a82-9481-05fdac171449-kube-api-access-7whbb\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.813812 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf7wt\" (UniqueName: \"kubernetes.io/projected/6e445e06-98fd-4fc2-b480-58ddf368aeb6-kube-api-access-gf7wt\") pod \"node-ca-sh856\" (UID: \"6e445e06-98fd-4fc2-b480-58ddf368aeb6\") " pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.816181 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d1
50597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.832104 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.832145 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.832165 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.832191 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.832210 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.834092 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.856613 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.877025 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.879233 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.879297 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.879353 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc 
kubenswrapper[5012]: I0219 05:25:40.879381 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.879434 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.896696 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.898898 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.902873 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.902959 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.902987 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.903015 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.903032 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.918082 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.920401 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.923575 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.923626 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.923645 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.923707 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.923727 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.942663 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc
0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763
eca77472\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.945700 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.947693 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.947748 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.947764 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.947789 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.947806 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.963080 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-sh856" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.963087 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.967272 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.971935 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.971990 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.972009 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.972036 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.972056 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:40 crc kubenswrapper[5012]: W0219 05:25:40.988091 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e445e06_98fd_4fc2_b480_58ddf368aeb6.slice/crio-c7d972296be7e71ee968839967902ffa3fdf68ae42fd65e0dd8a0ef74432c0c6 WatchSource:0}: Error finding container c7d972296be7e71ee968839967902ffa3fdf68ae42fd65e0dd8a0ef74432c0c6: Status 404 returned error can't find the container with id c7d972296be7e71ee968839967902ffa3fdf68ae42fd65e0dd8a0ef74432c0c6 Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.989447 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.993357 5012 kubelet_node_status.go:585] "Error updating node status, will 
retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:40Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:40 crc kubenswrapper[5012]: E0219 05:25:40.993802 5012 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.997515 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.997759 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.997789 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.997819 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:40 crc kubenswrapper[5012]: I0219 05:25:40.997842 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:40Z","lastTransitionTime":"2026-02-19T05:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.012418 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.035444 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.055971 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.081454 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.102543 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.102609 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.102627 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 
05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.102653 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.102673 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:41Z","lastTransitionTime":"2026-02-19T05:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.104228 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.113543 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.122021 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.138923 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" event={"ID":"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0","Type":"ContainerStarted","Data":"8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.141398 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-sh856" event={"ID":"6e445e06-98fd-4fc2-b480-58ddf368aeb6","Type":"ContainerStarted","Data":"c7d972296be7e71ee968839967902ffa3fdf68ae42fd65e0dd8a0ef74432c0c6"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.147645 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event 
handler 9 for removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.160453 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.177088 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.191864 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.205358 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.205397 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.205420 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.205443 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.205458 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:41Z","lastTransitionTime":"2026-02-19T05:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.207552 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc 
kubenswrapper[5012]: I0219 05:25:41.224799 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.238319 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.255959 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.274885 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.287023 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.290822 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:41 crc kubenswrapper[5012]: E0219 05:25:41.291032 5012 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:41 crc kubenswrapper[5012]: E0219 05:25:41.291122 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs podName:2e231950-a365-4a82-9481-05fdac171449 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:42.291097499 +0000 UTC m=+38.324420078 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs") pod "network-metrics-daemon-q5cb2" (UID: "2e231950-a365-4a82-9481-05fdac171449") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.302578 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.309326 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.309375 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.309390 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.309413 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.309429 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:41Z","lastTransitionTime":"2026-02-19T05:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.318862 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.337162 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.352125 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade67
5e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.369218 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.386935 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a08
2b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.405805 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.413084 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.413142 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.413161 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:41 crc 
kubenswrapper[5012]: I0219 05:25:41.413189 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.413211 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:41Z","lastTransitionTime":"2026-02-19T05:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.426092 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 
05:25:41.442832 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.461397 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.492089 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.509400 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.516796 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.516855 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.516876 5012 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.516902 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.516921 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:41Z","lastTransitionTime":"2026-02-19T05:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.526390 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86
c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.548482 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:41Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:41 crc 
kubenswrapper[5012]: I0219 05:25:41.620114 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.620158 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.620167 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.620183 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.620194 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:41Z","lastTransitionTime":"2026-02-19T05:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.656047 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:46:55.879408546 +0000 UTC Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.724403 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.724487 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.724510 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.724543 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.724564 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:41Z","lastTransitionTime":"2026-02-19T05:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.828481 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.828550 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.828570 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.828600 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.828620 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:41Z","lastTransitionTime":"2026-02-19T05:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.931956 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.932062 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.932092 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.932135 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:41 crc kubenswrapper[5012]: I0219 05:25:41.932163 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:41Z","lastTransitionTime":"2026-02-19T05:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.035687 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.035760 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.035794 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.035825 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.035851 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.139919 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.139973 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.139985 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.140007 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.140026 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.146438 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-sh856" event={"ID":"6e445e06-98fd-4fc2-b480-58ddf368aeb6","Type":"ContainerStarted","Data":"6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4"} Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.174412 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026
-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.190973 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade67
5e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.211850 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.232952 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
5:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.243709 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.243770 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.243785 5012 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.243811 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.243830 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.254922 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.275884 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.292046 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.303267 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:42 crc kubenswrapper[5012]: E0219 05:25:42.303489 5012 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:42 crc kubenswrapper[5012]: E0219 05:25:42.303599 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs podName:2e231950-a365-4a82-9481-05fdac171449 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:44.303570614 +0000 UTC m=+40.336893193 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs") pod "network-metrics-daemon-q5cb2" (UID: "2e231950-a365-4a82-9481-05fdac171449") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.309490 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"im
age\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.329657 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.347299 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.347412 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.347433 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.347464 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.347486 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.364568 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.381156 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.398081 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.412058 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc 
kubenswrapper[5012]: I0219 05:25:42.429333 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.451247 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.451346 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.451367 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.451405 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.451425 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.452294 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.474435 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:42Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.555062 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:42 crc 
kubenswrapper[5012]: I0219 05:25:42.555126 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.555144 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.555176 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.555194 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.656288 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:32:05.742562111 +0000 UTC
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.659065 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.659131 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.659151 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.659181 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.659205 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.703074 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.703142 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.703137 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 05:25:42 crc kubenswrapper[5012]: E0219 05:25:42.703349 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.703394 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 05:25:42 crc kubenswrapper[5012]: E0219 05:25:42.703550 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449"
Feb 19 05:25:42 crc kubenswrapper[5012]: E0219 05:25:42.703730 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 05:25:42 crc kubenswrapper[5012]: E0219 05:25:42.703961 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.762888 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.762955 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.762973 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.763001 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.763027 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.866887 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.866951 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.866970 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.867001 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.867022 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.970920 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.971008 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.971027 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.971055 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:42 crc kubenswrapper[5012]: I0219 05:25:42.971077 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:42Z","lastTransitionTime":"2026-02-19T05:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.074663 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.075444 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.075482 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.075511 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.075533 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:43Z","lastTransitionTime":"2026-02-19T05:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.179438 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.180179 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.180361 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.180510 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.180634 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:43Z","lastTransitionTime":"2026-02-19T05:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.284448 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.284674 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.284808 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.284970 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.285373 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:43Z","lastTransitionTime":"2026-02-19T05:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.388969 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.389043 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.389065 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.389099 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.389118 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:43Z","lastTransitionTime":"2026-02-19T05:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.492880 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.492965 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.492985 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.493014 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.493036 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:43Z","lastTransitionTime":"2026-02-19T05:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.596491 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.596557 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.596576 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.596603 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.596622 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:43Z","lastTransitionTime":"2026-02-19T05:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.656766 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 09:53:34.171894521 +0000 UTC
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.699608 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.699673 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.699694 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.699720 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.699737 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:43Z","lastTransitionTime":"2026-02-19T05:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.803407 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.803481 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.803504 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.803536 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.803558 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:43Z","lastTransitionTime":"2026-02-19T05:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.906397 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.906460 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.906484 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.906518 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:43 crc kubenswrapper[5012]: I0219 05:25:43.906543 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:43Z","lastTransitionTime":"2026-02-19T05:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.010054 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.010115 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.010130 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.010153 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.010165 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.114020 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.114082 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.114102 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.114132 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.114153 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.217946 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.218007 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.218026 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.218051 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.218074 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.321805 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.321869 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.321886 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.321912 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.321935 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.327596 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2"
Feb 19 05:25:44 crc kubenswrapper[5012]: E0219 05:25:44.327849 5012 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 05:25:44 crc kubenswrapper[5012]: E0219 05:25:44.327964 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs podName:2e231950-a365-4a82-9481-05fdac171449 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:48.327934431 +0000 UTC m=+44.361257040 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs") pod "network-metrics-daemon-q5cb2" (UID: "2e231950-a365-4a82-9481-05fdac171449") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.425093 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.425142 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.425154 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.425175 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.425190 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.528649 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.528714 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.528731 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.528760 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.528778 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.631996 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.632069 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.632091 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.632117 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.632136 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.657357 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 09:52:40.495057394 +0000 UTC
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.702990 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.703158 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.703264 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 05:25:44 crc kubenswrapper[5012]: E0219 05:25:44.703182 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 05:25:44 crc kubenswrapper[5012]: E0219 05:25:44.703478 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.703516 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2"
Feb 19 05:25:44 crc kubenswrapper[5012]: E0219 05:25:44.703672 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 05:25:44 crc kubenswrapper[5012]: E0219 05:25:44.703780 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.734852 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.734919 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.734940 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.734964 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.734984 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.738436 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.760356 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.781056 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.796840 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.816843 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.838013 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.839227 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.839347 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.839368 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.839395 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.839416 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.856875 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.878072 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.898589 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.919182 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.940266 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.942789 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.942842 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.942862 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.942926 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.942948 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:44Z","lastTransitionTime":"2026-02-19T05:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.961025 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:44 crc kubenswrapper[5012]: I0219 05:25:44.984808 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:44Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.003087 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade67
5e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:45Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.020505 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:45Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.045961 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
5:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:45Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.046373 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.046446 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.046466 5012 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.046500 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.046519 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.150107 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.150811 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.150859 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.150893 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.150915 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.254451 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.254523 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.254567 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.254595 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.254617 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.357797 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.357858 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.357875 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.357928 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.357974 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.462202 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.462270 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.462286 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.462348 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.462372 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.565904 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.565976 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.565999 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.566042 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.566069 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.658371 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:46:17.807316512 +0000 UTC Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.670133 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.670214 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.670236 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.670266 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.670287 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.773820 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.773877 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.773894 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.773923 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.773942 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.877293 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.877392 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.877416 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.877446 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.877464 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.980760 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.981064 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.981199 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.981380 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:45 crc kubenswrapper[5012]: I0219 05:25:45.981565 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:45Z","lastTransitionTime":"2026-02-19T05:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.084628 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.084669 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.084678 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.084693 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.084703 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:46Z","lastTransitionTime":"2026-02-19T05:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.187759 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.188565 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.188622 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.188655 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.188680 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:46Z","lastTransitionTime":"2026-02-19T05:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.291900 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.291969 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.291983 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.292000 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.292012 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:46Z","lastTransitionTime":"2026-02-19T05:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.395765 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.395829 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.395846 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.395873 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.395895 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:46Z","lastTransitionTime":"2026-02-19T05:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.499981 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.500046 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.500065 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.500092 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.500110 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:46Z","lastTransitionTime":"2026-02-19T05:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.603900 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.603986 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.604005 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.604038 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.604066 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:46Z","lastTransitionTime":"2026-02-19T05:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.659026 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:40:26.416354702 +0000 UTC Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.702711 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.702793 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.702882 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:46 crc kubenswrapper[5012]: E0219 05:25:46.702912 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:46 crc kubenswrapper[5012]: E0219 05:25:46.703043 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:46 crc kubenswrapper[5012]: E0219 05:25:46.703294 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.703570 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:46 crc kubenswrapper[5012]: E0219 05:25:46.703746 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.706456 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.706478 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.706486 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.706501 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.706512 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:46Z","lastTransitionTime":"2026-02-19T05:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.810361 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.810686 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.810884 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.811127 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.811568 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:46Z","lastTransitionTime":"2026-02-19T05:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.914815 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.914903 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.914920 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.914944 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:46 crc kubenswrapper[5012]: I0219 05:25:46.914962 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:46Z","lastTransitionTime":"2026-02-19T05:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.018426 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.018492 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.018511 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.018538 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.018557 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.122109 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.122207 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.122229 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.122262 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.122282 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.225739 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.225809 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.225834 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.225861 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.225878 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.329424 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.329480 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.329501 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.329527 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.329545 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.432555 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.432614 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.432627 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.432655 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.432667 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.535745 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.535790 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.535802 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.535823 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.535836 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.638659 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.638710 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.638734 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.638763 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.638788 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.659437 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 15:01:01.889708936 +0000 UTC Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.741846 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.741912 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.741923 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.741947 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.741960 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.845026 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.845102 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.845126 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.845159 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.845182 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.948682 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.948779 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.948803 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.948836 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:47 crc kubenswrapper[5012]: I0219 05:25:47.948860 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:47Z","lastTransitionTime":"2026-02-19T05:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.051940 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.052004 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.052023 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.052061 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.052082 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.155677 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.155757 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.155776 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.155809 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.155829 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.259533 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.259609 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.259630 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.259661 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.259684 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.363109 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.363181 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.363201 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.363231 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.363254 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.378021 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:48 crc kubenswrapper[5012]: E0219 05:25:48.378256 5012 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:48 crc kubenswrapper[5012]: E0219 05:25:48.378404 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs podName:2e231950-a365-4a82-9481-05fdac171449 nodeName:}" failed. No retries permitted until 2026-02-19 05:25:56.378371265 +0000 UTC m=+52.411693874 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs") pod "network-metrics-daemon-q5cb2" (UID: "2e231950-a365-4a82-9481-05fdac171449") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.467678 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.467745 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.467763 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.467794 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.467813 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.571093 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.571160 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.571183 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.571447 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.571473 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.660221 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 23:47:51.643934886 +0000 UTC Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.674841 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.674888 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.674904 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.674929 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.674948 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.703568 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.703643 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.703705 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:48 crc kubenswrapper[5012]: E0219 05:25:48.703755 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:48 crc kubenswrapper[5012]: E0219 05:25:48.703937 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.703956 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:48 crc kubenswrapper[5012]: E0219 05:25:48.704101 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:48 crc kubenswrapper[5012]: E0219 05:25:48.704208 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.778631 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.778727 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.778745 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.778803 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.778822 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.881937 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.881996 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.882016 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.882039 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.882059 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.986210 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.986361 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.986383 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.986450 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:48 crc kubenswrapper[5012]: I0219 05:25:48.986478 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:48Z","lastTransitionTime":"2026-02-19T05:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.090075 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.090138 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.090157 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.090183 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.090201 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:49Z","lastTransitionTime":"2026-02-19T05:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.192714 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.192773 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.192794 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.192821 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.192839 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:49Z","lastTransitionTime":"2026-02-19T05:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.296750 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.296820 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.296844 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.296878 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.296907 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:49Z","lastTransitionTime":"2026-02-19T05:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.400585 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.400642 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.400659 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.400685 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.400706 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:49Z","lastTransitionTime":"2026-02-19T05:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.503968 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.504052 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.504073 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.504102 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.504122 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:49Z","lastTransitionTime":"2026-02-19T05:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.607677 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.607748 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.607765 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.607794 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.607816 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:49Z","lastTransitionTime":"2026-02-19T05:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.660576 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:44:21.960285625 +0000 UTC Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.710663 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.710717 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.710733 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.710757 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.710775 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:49Z","lastTransitionTime":"2026-02-19T05:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.813376 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.813446 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.813474 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.813506 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.813530 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:49Z","lastTransitionTime":"2026-02-19T05:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.917902 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.917963 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.918024 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.918052 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:49 crc kubenswrapper[5012]: I0219 05:25:49.918070 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:49Z","lastTransitionTime":"2026-02-19T05:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.021422 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.021487 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.021505 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.021533 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.021552 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.125400 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.125473 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.125493 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.125521 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.125543 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.228679 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.228740 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.228755 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.228777 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.228793 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.332355 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.332405 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.332417 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.332433 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.332445 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.436078 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.436163 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.436181 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.436208 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.436255 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.539545 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.539687 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.539708 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.539734 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.539753 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.642429 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.642469 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.642478 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.642494 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.642504 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.660720 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 07:34:24.460995385 +0000 UTC Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.702492 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.702581 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.702590 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:50 crc kubenswrapper[5012]: E0219 05:25:50.702746 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.702830 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:50 crc kubenswrapper[5012]: E0219 05:25:50.702931 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:25:50 crc kubenswrapper[5012]: E0219 05:25:50.703194 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:50 crc kubenswrapper[5012]: E0219 05:25:50.703399 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.745989 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.746058 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.746076 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.746104 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.746123 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.849742 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.849809 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.849827 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.849858 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.849878 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.953230 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.953377 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.953408 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.953440 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:50 crc kubenswrapper[5012]: I0219 05:25:50.953464 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:50Z","lastTransitionTime":"2026-02-19T05:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.045466 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.045542 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.045561 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.045591 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.045611 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: E0219 05:25:51.066604 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:51Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.072113 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.072208 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.072254 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.072282 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.072340 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: E0219 05:25:51.094071 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:51Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.099989 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.100034 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.100048 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.100069 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.100081 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: E0219 05:25:51.119430 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:51Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.124205 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.124258 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.124275 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.124326 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.124346 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: E0219 05:25:51.145542 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:51Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.150536 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.150612 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.150640 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.150672 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.150695 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: E0219 05:25:51.174728 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:51Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:51 crc kubenswrapper[5012]: E0219 05:25:51.174938 5012 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.177186 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.177268 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.177295 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.177372 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.177400 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.280197 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.280236 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.280247 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.280263 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.280276 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.383533 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.383596 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.383615 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.383642 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.383661 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.486814 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.486925 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.486951 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.486985 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.487009 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.591077 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.591172 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.591191 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.591225 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.591248 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.661083 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:22:04.77977049 +0000 UTC Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.696991 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.697104 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.697130 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.697177 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.697198 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.800814 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.800881 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.800900 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.800926 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.800944 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.904055 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.904123 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.904143 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.904173 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:51 crc kubenswrapper[5012]: I0219 05:25:51.904195 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:51Z","lastTransitionTime":"2026-02-19T05:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.007939 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.007998 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.008015 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.008040 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.008057 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.111967 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.112096 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.112174 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.112211 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.112238 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.215344 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.215405 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.215423 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.215451 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.215469 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.319410 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.319468 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.319485 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.319511 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.319529 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.422945 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.423007 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.423024 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.423049 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.423072 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.526965 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.527026 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.527050 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.527082 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.527104 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.630355 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.630446 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.630471 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.630504 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.630525 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.662228 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 05:46:38.149322131 +0000 UTC Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.702267 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.702800 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.702861 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.702826 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:52 crc kubenswrapper[5012]: E0219 05:25:52.703054 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.703422 5012 scope.go:117] "RemoveContainer" containerID="a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c" Feb 19 05:25:52 crc kubenswrapper[5012]: E0219 05:25:52.703916 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:25:52 crc kubenswrapper[5012]: E0219 05:25:52.704130 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:52 crc kubenswrapper[5012]: E0219 05:25:52.704141 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.734641 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.734697 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.734713 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.734742 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.734761 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.838279 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.839127 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.839151 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.839184 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.839207 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.943277 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.943369 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.943393 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.943420 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:52 crc kubenswrapper[5012]: I0219 05:25:52.943438 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:52Z","lastTransitionTime":"2026-02-19T05:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.046834 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.046887 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.046897 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.046914 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.046926 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.150566 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.150636 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.150667 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.150701 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.150725 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.190073 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/1.log" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.194400 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.195121 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.253441 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.253487 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.253498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.253517 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.253532 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.255167 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc 
kubenswrapper[5012]: I0219 05:25:53.274249 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.290054 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.304234 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.318702 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.330480 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.346424 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.355948 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.355973 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.355981 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.355998 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.356009 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.364639 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.381452 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.395852 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade67
5e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.407217 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.425959 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
5:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.438918 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.456346 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.458348 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.458419 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.458431 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.458449 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.458461 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.473488 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.499982 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 
05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:53Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.561971 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.562014 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.562025 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.562045 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.562058 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.663055 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 13:45:49.662748424 +0000 UTC Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.664392 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.664437 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.664450 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.664469 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.664482 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.767262 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.767317 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.767326 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.767341 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.767351 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.870919 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.870963 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.870974 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.870992 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.871005 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.974614 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.974660 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.974672 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.974691 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:53 crc kubenswrapper[5012]: I0219 05:25:53.974707 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:53Z","lastTransitionTime":"2026-02-19T05:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.076792 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.076874 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.076895 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.076927 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.076955 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:54Z","lastTransitionTime":"2026-02-19T05:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.180436 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.180507 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.180530 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.180557 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.180575 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:54Z","lastTransitionTime":"2026-02-19T05:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.201215 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/2.log" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.202289 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/1.log" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.205977 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f" exitCode=1 Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.206053 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.206115 5012 scope.go:117] "RemoveContainer" containerID="a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.206781 5012 scope.go:117] "RemoveContainer" containerID="13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f" Feb 19 05:25:54 crc kubenswrapper[5012]: E0219 05:25:54.206967 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.232209 5012 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b9
0dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.244961 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.259544 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.275762 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.283526 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.283751 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.283901 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.284058 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.284199 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:54Z","lastTransitionTime":"2026-02-19T05:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.297887 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.315740 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.332974 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.354072 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.377573 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.387924 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.387973 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.387992 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.388019 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.388038 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:54Z","lastTransitionTime":"2026-02-19T05:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.401685 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event 
handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\
":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj
2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.423217 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.439569 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.456024 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc 
kubenswrapper[5012]: I0219 05:25:54.475455 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.491520 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.491601 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.491621 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.491650 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.491670 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:54Z","lastTransitionTime":"2026-02-19T05:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.496758 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.519528 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.594695 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:54 crc 
kubenswrapper[5012]: I0219 05:25:54.594750 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.594766 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.594791 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.594812 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:54Z","lastTransitionTime":"2026-02-19T05:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.664279 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:53:26.037991606 +0000 UTC Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.698181 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.698242 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.698259 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.698284 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.698334 5012 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:54Z","lastTransitionTime":"2026-02-19T05:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.702695 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.702756 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.702789 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:54 crc kubenswrapper[5012]: E0219 05:25:54.702952 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.702978 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:54 crc kubenswrapper[5012]: E0219 05:25:54.703133 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:25:54 crc kubenswrapper[5012]: E0219 05:25:54.703295 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:54 crc kubenswrapper[5012]: E0219 05:25:54.703480 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.722676 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.742994 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.761841 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.783908 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.796881 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.801973 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.802031 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 
05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.802048 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.802073 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.802096 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:54Z","lastTransitionTime":"2026-02-19T05:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.810121 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.812116 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.834828 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.858585 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.877068 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.892151 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.905375 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.905458 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.905482 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.905514 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.905533 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:54Z","lastTransitionTime":"2026-02-19T05:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.915440 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1
96b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.948846 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event 
handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\
":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj
2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.972732 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:54 crc kubenswrapper[5012]: I0219 05:25:54.996751 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:54Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.011452 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.011558 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.011577 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.011636 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.011657 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.024809 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.042708 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc 
kubenswrapper[5012]: I0219 05:25:55.061841 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.076115 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.096805 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.114158 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.116446 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.116500 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.116542 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.116572 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.116590 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.130185 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc 
kubenswrapper[5012]: I0219 05:25:55.148503 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.164992 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.181927 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.207463 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452
b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:
25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.214049 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/2.log" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.219079 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.219152 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.219181 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.219212 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.219238 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.219795 5012 scope.go:117] "RemoveContainer" containerID="13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f" Feb 19 05:25:55 crc kubenswrapper[5012]: E0219 05:25:55.220077 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.224738 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime
\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.239778 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.274584 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a08
2b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.309588 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.322157 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.322199 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.322211 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc 
kubenswrapper[5012]: I0219 05:25:55.322242 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.322256 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.327396 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 
05:25:55.339768 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.353005 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.365840 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.387795 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5621cb116e4632a58a0bf56d32bfebf978bfa7754a301ea53215c3680f6c52c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:38Z\\\",\\\"message\\\":\\\"qos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:25:38.177464 6522 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:38.177510 6522 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:38.177553 6522 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0219 05:25:38.177591 6522 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:38.177654 6522 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:38.177660 6522 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:38.177715 6522 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:38.177739 6522 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:38.177779 6522 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:25:38.177833 6522 factory.go:656] Stopping watch factory\\\\nI0219 05:25:38.177845 6522 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:38.177868 6522 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:38.177871 6522 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:38.177875 6522 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:38.177822 6522 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:25:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event 
handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\
":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj
2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.404791 5012 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.419145 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-
proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.424459 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.424495 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.424507 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.424527 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.424543 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.430396 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.442208 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 
05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.453553 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.465265 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.474072 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.489677 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.506621 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.524144 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.526803 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.526846 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.526859 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.526875 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.526889 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.538093 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.554386 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.563899 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.573694 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc 
kubenswrapper[5012]: I0219 05:25:55.590518 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.605938 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.621647 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:25:55Z is after 2025-08-24T17:21:41Z" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.629242 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc 
kubenswrapper[5012]: I0219 05:25:55.629295 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.629343 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.629366 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.629385 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.664471 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 16:05:04.018209423 +0000 UTC Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.732712 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.732769 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.732789 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.732815 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.732833 5012 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.835776 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.835851 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.835876 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.835908 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.835931 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.938472 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.938532 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.938554 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.938580 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:55 crc kubenswrapper[5012]: I0219 05:25:55.938602 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:55Z","lastTransitionTime":"2026-02-19T05:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.042267 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.042356 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.042376 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.042401 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.042420 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.145992 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.146057 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.146074 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.146099 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.146118 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.249102 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.249171 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.249195 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.249223 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.249247 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.352839 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.352902 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.352923 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.352950 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.352967 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.389465 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.389674 5012 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.389757 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs podName:2e231950-a365-4a82-9481-05fdac171449 nodeName:}" failed. No retries permitted until 2026-02-19 05:26:12.389735866 +0000 UTC m=+68.423058435 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs") pod "network-metrics-daemon-q5cb2" (UID: "2e231950-a365-4a82-9481-05fdac171449") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.456012 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.456069 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.456093 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.456126 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.456149 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.490908 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.491106 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:26:28.491070259 +0000 UTC m=+84.524392868 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.491354 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.491485 5012 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 
05:25:56.491574 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:26:28.491556141 +0000 UTC m=+84.524878710 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.558404 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.558443 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.558454 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.558470 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.558480 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.592569 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.592633 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.592678 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.592826 5012 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.592893 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:26:28.592876823 +0000 UTC m=+84.626199402 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.592906 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.592953 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.593004 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.593021 5012 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.593086 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 05:26:28.593066328 +0000 UTC m=+84.626388967 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.592969 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.593126 5012 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.593251 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 05:26:28.593212182 +0000 UTC m=+84.626534781 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.661694 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.661765 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.661785 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.661815 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.661838 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.664799 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:34:19.379512092 +0000 UTC Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.702410 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.702771 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.702765 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.702960 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.702960 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.703128 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.703381 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:25:56 crc kubenswrapper[5012]: E0219 05:25:56.703513 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.764696 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.764815 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.764838 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.764862 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.764939 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.868810 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.869226 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.869283 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.869364 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.869386 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.973178 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.973242 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.973261 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.973286 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:56 crc kubenswrapper[5012]: I0219 05:25:56.973327 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:56Z","lastTransitionTime":"2026-02-19T05:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.076015 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.076089 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.076113 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.076139 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.076159 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:57Z","lastTransitionTime":"2026-02-19T05:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.179493 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.179583 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.179606 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.179635 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.179656 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:57Z","lastTransitionTime":"2026-02-19T05:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.282279 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.282362 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.282379 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.282417 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.282436 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:57Z","lastTransitionTime":"2026-02-19T05:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.385419 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.385482 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.385503 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.385530 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.385550 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:57Z","lastTransitionTime":"2026-02-19T05:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.488367 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.488431 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.488448 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.488477 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.488498 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:57Z","lastTransitionTime":"2026-02-19T05:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.590927 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.591009 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.591023 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.591075 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.591093 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:57Z","lastTransitionTime":"2026-02-19T05:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.665779 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:26:19.037775697 +0000 UTC Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.694376 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.694424 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.694435 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.694459 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.694471 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:57Z","lastTransitionTime":"2026-02-19T05:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.798086 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.798126 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.798136 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.798151 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.798164 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:57Z","lastTransitionTime":"2026-02-19T05:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.901241 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.901388 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.901409 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.901478 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:57 crc kubenswrapper[5012]: I0219 05:25:57.901496 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:57Z","lastTransitionTime":"2026-02-19T05:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.004692 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.004768 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.004791 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.004821 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.004845 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.108964 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.109051 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.109066 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.109087 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.109101 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.212181 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.212677 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.212878 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.213040 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.213170 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.316685 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.316727 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.316741 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.316763 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.316777 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.420395 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.420987 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.421185 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.421397 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.421584 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.525580 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.525626 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.525637 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.525655 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.525667 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.628716 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.628793 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.628813 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.628840 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.628857 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.666400 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:01:21.933126863 +0000 UTC Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.702211 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.702247 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.702265 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.702492 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:25:58 crc kubenswrapper[5012]: E0219 05:25:58.702479 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:25:58 crc kubenswrapper[5012]: E0219 05:25:58.702623 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:25:58 crc kubenswrapper[5012]: E0219 05:25:58.702723 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:25:58 crc kubenswrapper[5012]: E0219 05:25:58.702818 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.732817 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.732887 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.732912 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.732945 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.732969 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.836354 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.836404 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.836421 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.836449 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.836475 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.940371 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.940472 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.940498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.941014 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:58 crc kubenswrapper[5012]: I0219 05:25:58.941045 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:58Z","lastTransitionTime":"2026-02-19T05:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.043440 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.043498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.043521 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.043549 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.043571 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.147040 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.147105 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.147128 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.147159 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.147182 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.249335 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.249395 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.249414 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.249453 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.249478 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.352692 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.352762 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.352813 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.352842 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.352862 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.455914 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.455966 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.455977 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.455997 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.456009 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.559621 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.559680 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.559696 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.559722 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.559741 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.663282 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.663388 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.663414 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.663447 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.663486 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.667385 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 13:23:11.802605001 +0000 UTC Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.766185 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.766223 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.766240 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.766262 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.766281 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.870041 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.870113 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.870132 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.870161 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.870182 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.973680 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.973752 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.973775 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.973806 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:25:59 crc kubenswrapper[5012]: I0219 05:25:59.973828 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:25:59Z","lastTransitionTime":"2026-02-19T05:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.077250 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.077353 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.077372 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.077397 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.077415 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:00Z","lastTransitionTime":"2026-02-19T05:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.181670 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.181756 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.181819 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.181861 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.181890 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:00Z","lastTransitionTime":"2026-02-19T05:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.285019 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.285106 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.285126 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.285153 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.285172 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:00Z","lastTransitionTime":"2026-02-19T05:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.387953 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.388001 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.388012 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.388032 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.388044 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:00Z","lastTransitionTime":"2026-02-19T05:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.495700 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.495780 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.495845 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.496796 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.496873 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:00Z","lastTransitionTime":"2026-02-19T05:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.600405 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.600485 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.600508 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.600537 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.600558 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:00Z","lastTransitionTime":"2026-02-19T05:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.668166 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:48:42.895356016 +0000 UTC Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.701970 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.702005 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.702086 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.702190 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:00 crc kubenswrapper[5012]: E0219 05:26:00.702457 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:00 crc kubenswrapper[5012]: E0219 05:26:00.702885 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:00 crc kubenswrapper[5012]: E0219 05:26:00.703080 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:00 crc kubenswrapper[5012]: E0219 05:26:00.703156 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.704457 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.704562 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.704586 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.704654 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.704676 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:00Z","lastTransitionTime":"2026-02-19T05:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.807647 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.808075 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.808196 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.808420 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.808559 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:00Z","lastTransitionTime":"2026-02-19T05:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.911568 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.911644 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.911667 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.911695 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:00 crc kubenswrapper[5012]: I0219 05:26:00.911713 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:00Z","lastTransitionTime":"2026-02-19T05:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.014347 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.014408 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.014427 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.014453 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.014476 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.117165 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.117232 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.117258 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.117288 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.117369 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.220067 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.220126 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.220146 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.220176 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.220200 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.323355 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.323408 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.323426 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.323449 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.323468 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.426294 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.426374 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.426391 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.426415 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.426434 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.495575 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.495666 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.495683 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.495741 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.495759 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: E0219 05:26:01.518849 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:01Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.522783 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.522823 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.522841 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.522865 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.522882 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: E0219 05:26:01.539919 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:01Z is after 2025-08-24T17:21:41Z"
Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.544513 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.544581 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.544606 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.544638 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.544664 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: E0219 05:26:01.560918 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:01Z is after 2025-08-24T17:21:41Z"
Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.565072 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.565247 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.565391 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.565545 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.565666 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: E0219 05:26:01.584862 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:01Z is after 2025-08-24T17:21:41Z"
Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.589422 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.589490 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.589510 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.589540 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.589564 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: E0219 05:26:01.605384 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:01Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:01 crc kubenswrapper[5012]: E0219 05:26:01.606217 5012 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.608167 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.608351 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.608459 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.608581 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.608714 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.669178 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 13:20:22.94611273 +0000 UTC Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.713477 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.713545 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.713564 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.713591 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.713609 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.816410 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.816476 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.816493 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.816520 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.816537 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.919875 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.919903 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.919911 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.919925 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:01 crc kubenswrapper[5012]: I0219 05:26:01.919936 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:01Z","lastTransitionTime":"2026-02-19T05:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.029865 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.029954 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.029981 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.030013 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.030039 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.133000 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.133060 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.133077 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.133104 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.133125 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.236031 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.236110 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.236132 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.236162 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.236181 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.339090 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.339157 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.339175 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.339205 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.339223 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.442018 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.442082 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.442103 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.442130 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.442147 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.544979 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.545046 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.545067 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.545095 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.545116 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.647727 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.647784 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.647794 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.647814 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.647830 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.670287 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:43:53.883796046 +0000 UTC Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.702082 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.702128 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.702146 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:02 crc kubenswrapper[5012]: E0219 05:26:02.702246 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.702345 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:02 crc kubenswrapper[5012]: E0219 05:26:02.702579 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:02 crc kubenswrapper[5012]: E0219 05:26:02.702583 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:02 crc kubenswrapper[5012]: E0219 05:26:02.702642 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.750335 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.750378 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.750387 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.750406 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.750419 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.853932 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.853984 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.854001 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.854028 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.854046 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.957010 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.957060 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.957076 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.957099 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:02 crc kubenswrapper[5012]: I0219 05:26:02.957116 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:02Z","lastTransitionTime":"2026-02-19T05:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.059901 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.059955 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.059972 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.060002 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.060021 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.163401 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.163467 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.163485 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.163512 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.163530 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.266291 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.266378 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.266395 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.266421 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.266437 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.369998 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.370065 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.370083 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.370109 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.370130 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.472938 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.472977 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.472988 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.473004 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.473015 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.575340 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.575402 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.575421 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.575445 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.575463 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.671195 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 00:21:35.039953499 +0000 UTC Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.679121 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.679277 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.679350 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.679387 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.679411 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.782748 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.782817 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.782835 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.782860 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.782879 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.885392 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.885451 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.885468 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.885498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.885518 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.989654 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.989736 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.989761 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.989792 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:03 crc kubenswrapper[5012]: I0219 05:26:03.989816 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:03Z","lastTransitionTime":"2026-02-19T05:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.093759 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.093859 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.093880 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.093950 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.093970 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:04Z","lastTransitionTime":"2026-02-19T05:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.198168 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.198227 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.198245 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.198273 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.198291 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:04Z","lastTransitionTime":"2026-02-19T05:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.301757 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.301827 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.301848 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.301875 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.301897 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:04Z","lastTransitionTime":"2026-02-19T05:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.412193 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.412268 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.412289 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.412362 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.412386 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:04Z","lastTransitionTime":"2026-02-19T05:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.516029 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.516095 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.516117 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.516146 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.516167 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:04Z","lastTransitionTime":"2026-02-19T05:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.619972 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.620040 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.620057 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.620085 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.620107 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:04Z","lastTransitionTime":"2026-02-19T05:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.671910 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:04:15.977636594 +0000 UTC Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.702181 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.702263 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.702263 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:04 crc kubenswrapper[5012]: E0219 05:26:04.702432 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.702457 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:04 crc kubenswrapper[5012]: E0219 05:26:04.702583 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:04 crc kubenswrapper[5012]: E0219 05:26:04.702809 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:04 crc kubenswrapper[5012]: E0219 05:26:04.702944 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.723463 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.723550 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.723574 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.723610 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.723634 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:04Z","lastTransitionTime":"2026-02-19T05:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.724247 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df82
5fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.748029 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.784567 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.804820 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.826839 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.826926 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.826943 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.827005 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.827023 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:04Z","lastTransitionTime":"2026-02-19T05:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.827628 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8
d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.849825 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.867216 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.883871 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.904104 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.917541 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.929788 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.929852 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.929875 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:04 crc 
kubenswrapper[5012]: I0219 05:26:04.929908 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.929934 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:04Z","lastTransitionTime":"2026-02-19T05:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.940209 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 
05:26:04.960336 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:04 crc kubenswrapper[5012]: I0219 05:26:04.985008 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:04Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.003971 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe14130
84dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\
\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:05Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.020700 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:05Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.033031 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.033092 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.033114 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.033137 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 
05:26:05.033158 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.041334 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 
05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:05Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.061775 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:05Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.136191 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.136269 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.136288 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.136435 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.136465 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.240063 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.240630 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.240646 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.240674 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.240690 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.344171 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.344275 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.344294 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.344329 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.344346 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.447480 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.447545 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.447565 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.447592 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.447611 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.551391 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.551472 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.551490 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.551515 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.551532 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.654836 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.654887 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.654903 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.654924 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.654937 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.672508 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:26:03.654661397 +0000 UTC Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.757860 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.757920 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.757936 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.757962 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.757979 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.862256 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.862350 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.862375 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.862410 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.862431 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.966069 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.966129 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.966150 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.966178 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:05 crc kubenswrapper[5012]: I0219 05:26:05.966195 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:05Z","lastTransitionTime":"2026-02-19T05:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.069829 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.069926 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.069946 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.069973 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.069997 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:06Z","lastTransitionTime":"2026-02-19T05:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.173386 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.173449 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.173466 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.173491 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.173509 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:06Z","lastTransitionTime":"2026-02-19T05:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.276984 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.277030 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.277046 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.277068 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.277084 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:06Z","lastTransitionTime":"2026-02-19T05:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.380128 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.380493 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.380684 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.380887 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.381024 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:06Z","lastTransitionTime":"2026-02-19T05:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.489727 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.490102 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.490262 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.490467 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.490619 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:06Z","lastTransitionTime":"2026-02-19T05:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.594418 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.594887 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.595044 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.595195 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.595358 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:06Z","lastTransitionTime":"2026-02-19T05:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.673377 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:32:08.195725191 +0000 UTC Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.699444 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.699677 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.699820 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.699950 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.700074 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:06Z","lastTransitionTime":"2026-02-19T05:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.703491 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.703698 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.703559 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.703498 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:06 crc kubenswrapper[5012]: E0219 05:26:06.704096 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:06 crc kubenswrapper[5012]: E0219 05:26:06.704361 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:06 crc kubenswrapper[5012]: E0219 05:26:06.705044 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:06 crc kubenswrapper[5012]: E0219 05:26:06.705252 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.706354 5012 scope.go:117] "RemoveContainer" containerID="13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f" Feb 19 05:26:06 crc kubenswrapper[5012]: E0219 05:26:06.706859 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.804177 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.804224 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.804234 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.804255 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.804265 5012 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:06Z","lastTransitionTime":"2026-02-19T05:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.907555 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.907601 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.907614 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.907635 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:06 crc kubenswrapper[5012]: I0219 05:26:06.907654 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:06Z","lastTransitionTime":"2026-02-19T05:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.010668 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.010719 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.010734 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.010755 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.010769 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.113718 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.113768 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.113783 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.113801 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.113814 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.216498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.216563 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.216579 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.216605 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.216623 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.320108 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.320177 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.320196 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.320248 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.320267 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.423098 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.423134 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.423162 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.423179 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.423189 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.526777 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.526854 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.526878 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.526909 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.526930 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.630699 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.630796 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.630816 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.630846 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.630867 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.674029 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:34:57.646779497 +0000 UTC
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.734821 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.734957 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.734979 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.735006 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.735025 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.838246 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.838376 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.838403 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.838441 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.838471 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.940849 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.940923 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.940948 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.940979 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:07 crc kubenswrapper[5012]: I0219 05:26:07.941008 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:07Z","lastTransitionTime":"2026-02-19T05:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.044011 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.044066 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.044082 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.044113 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.044132 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.146936 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.146994 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.147012 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.147036 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.147056 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.249662 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.249721 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.249742 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.249766 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.249784 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.352680 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.352761 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.352787 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.352817 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.352843 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.456211 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.456264 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.456281 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.456334 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.456354 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.558513 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.558545 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.558553 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.558570 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.558579 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.661571 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.661604 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.661619 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.661639 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.661656 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.675100 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 19:59:05.875333516 +0000 UTC
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.702492 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.702594 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 05:26:08 crc kubenswrapper[5012]: E0219 05:26:08.702665 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.702710 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.702747 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 05:26:08 crc kubenswrapper[5012]: E0219 05:26:08.702896 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 05:26:08 crc kubenswrapper[5012]: E0219 05:26:08.703037 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449"
Feb 19 05:26:08 crc kubenswrapper[5012]: E0219 05:26:08.703159 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.764399 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.764427 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.764436 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.764451 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.764460 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.866477 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.866508 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.866517 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.866529 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.866538 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.968749 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.968770 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.968779 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.968792 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:08 crc kubenswrapper[5012]: I0219 05:26:08.968801 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:08Z","lastTransitionTime":"2026-02-19T05:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.072536 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.072571 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.072580 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.072594 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.072602 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.174820 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.174858 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.174866 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.174882 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.174892 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.276392 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.276431 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.276441 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.276456 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.276467 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.379103 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.379141 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.379150 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.379164 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.379174 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.481719 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.481790 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.481813 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.481843 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.481865 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.585002 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.585042 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.585054 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.585071 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.585082 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.675598 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:50:56.2042401 +0000 UTC
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.687811 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.687840 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.687850 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.687862 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.687872 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.790952 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.791040 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.791069 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.791101 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.791129 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.894493 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.894548 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.894559 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.894574 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.894587 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.997283 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.997334 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.997343 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.997356 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:09 crc kubenswrapper[5012]: I0219 05:26:09.997364 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:09Z","lastTransitionTime":"2026-02-19T05:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.101052 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.101096 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.101106 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.101121 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.101132 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:10Z","lastTransitionTime":"2026-02-19T05:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.204388 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.204435 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.204452 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.204478 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.204496 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:10Z","lastTransitionTime":"2026-02-19T05:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.307615 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.307663 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.307675 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.307693 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.307707 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:10Z","lastTransitionTime":"2026-02-19T05:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.409791 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.409843 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.409862 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.409885 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.409904 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:10Z","lastTransitionTime":"2026-02-19T05:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.512372 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.512429 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.512447 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.512469 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.512488 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:10Z","lastTransitionTime":"2026-02-19T05:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.615164 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.615218 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.615235 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.615255 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.615271 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:10Z","lastTransitionTime":"2026-02-19T05:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.675951 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 21:31:00.295641366 +0000 UTC Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.702516 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.702534 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.702568 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.702625 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:10 crc kubenswrapper[5012]: E0219 05:26:10.702789 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:10 crc kubenswrapper[5012]: E0219 05:26:10.702933 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:10 crc kubenswrapper[5012]: E0219 05:26:10.703126 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:10 crc kubenswrapper[5012]: E0219 05:26:10.703348 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.717489 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.717532 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.717551 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.717574 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.717591 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:10Z","lastTransitionTime":"2026-02-19T05:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.819979 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.820031 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.820046 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.820073 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.820090 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:10Z","lastTransitionTime":"2026-02-19T05:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.923289 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.923394 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.923417 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.923449 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:10 crc kubenswrapper[5012]: I0219 05:26:10.923473 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:10Z","lastTransitionTime":"2026-02-19T05:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.026801 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.026852 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.026869 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.026891 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.026907 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.129461 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.129496 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.129508 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.129525 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.129538 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.231663 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.231689 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.231698 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.231708 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.231717 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.333610 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.333899 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.333960 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.334021 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.334080 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.436950 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.436998 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.437012 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.437031 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.437045 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.540479 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.540536 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.540552 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.540577 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.540592 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.643087 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.643153 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.643164 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.643183 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.643198 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.676950 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:51:16.405904806 +0000 UTC Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.745511 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.745741 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.745809 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.745886 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.745940 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.848438 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.848463 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.848471 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.848485 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.848495 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.951720 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.951773 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.951786 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.951805 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.951818 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.971986 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.972052 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.972071 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.972096 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.972116 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:11 crc kubenswrapper[5012]: E0219 05:26:11.986848 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:11Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.991713 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.991770 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.991788 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.991815 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:11 crc kubenswrapper[5012]: I0219 05:26:11.991835 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:11Z","lastTransitionTime":"2026-02-19T05:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.004431 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:12Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.008316 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.008421 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.008628 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.008714 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.008774 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.024394 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:12Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.027377 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.027472 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.027528 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.027588 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.027641 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.037922 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:12Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.041411 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.041452 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.041461 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.041477 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.041489 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.054793 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:12Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.055038 5012 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.057428 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.057531 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.057594 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.057661 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.057736 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.160088 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.160118 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.160126 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.160138 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.160147 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.262207 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.262253 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.262264 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.262284 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.262320 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.364806 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.365090 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.365154 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.365220 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.365290 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.393971 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.394293 5012 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.394466 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs podName:2e231950-a365-4a82-9481-05fdac171449 nodeName:}" failed. No retries permitted until 2026-02-19 05:26:44.394432892 +0000 UTC m=+100.427755531 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs") pod "network-metrics-daemon-q5cb2" (UID: "2e231950-a365-4a82-9481-05fdac171449") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.468522 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.468626 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.468645 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.468702 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.468722 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.574557 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.574593 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.574604 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.574621 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.574634 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.677061 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 03:07:50.09834755 +0000 UTC Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.677539 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.677619 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.677637 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.677671 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.677724 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.702476 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.702509 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.702579 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.702726 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.703020 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.703344 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.703441 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:12 crc kubenswrapper[5012]: E0219 05:26:12.703641 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.780873 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.780947 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.780972 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.781004 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.781023 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.884651 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.884692 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.884701 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.884719 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.884729 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.986897 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.986955 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.986975 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.986995 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:12 crc kubenswrapper[5012]: I0219 05:26:12.987010 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:12Z","lastTransitionTime":"2026-02-19T05:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.090099 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.090134 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.090148 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.090163 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.090173 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:13Z","lastTransitionTime":"2026-02-19T05:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.193410 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.193440 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.193450 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.193464 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.193474 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:13Z","lastTransitionTime":"2026-02-19T05:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.295526 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.295552 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.295561 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.295574 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.295584 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:13Z","lastTransitionTime":"2026-02-19T05:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.397865 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.397942 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.397963 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.397985 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.398002 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:13Z","lastTransitionTime":"2026-02-19T05:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.501381 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.501468 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.501487 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.501540 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.501556 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:13Z","lastTransitionTime":"2026-02-19T05:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.604151 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.604236 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.604255 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.604282 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.604335 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:13Z","lastTransitionTime":"2026-02-19T05:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.677370 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:01:02.128881556 +0000 UTC Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.707727 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.707777 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.707788 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.707806 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.707818 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:13Z","lastTransitionTime":"2026-02-19T05:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.811158 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.811246 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.811278 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.811353 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.811384 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:13Z","lastTransitionTime":"2026-02-19T05:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.914958 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.915145 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.915502 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.915546 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:13 crc kubenswrapper[5012]: I0219 05:26:13.915574 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:13Z","lastTransitionTime":"2026-02-19T05:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.018985 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.019047 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.019063 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.019090 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.019110 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.121849 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.121907 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.121926 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.121946 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.121958 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.225209 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.225258 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.225267 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.225285 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.225295 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.289460 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/0.log" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.289514 5012 generic.go:334] "Generic (PLEG): container finished" podID="e7a04e36-fbaa-4de1-871a-7225433eebb0" containerID="10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061" exitCode=1 Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.289565 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lkrsg" event={"ID":"e7a04e36-fbaa-4de1-871a-7225433eebb0","Type":"ContainerDied","Data":"10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.290683 5012 scope.go:117] "RemoveContainer" containerID="10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.310153 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:13Z\\\",\\\"message\\\":\\\"2026-02-19T05:25:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905\\\\n2026-02-19T05:25:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905 to /host/opt/cni/bin/\\\\n2026-02-19T05:25:28Z [verbose] multus-daemon started\\\\n2026-02-19T05:25:28Z [verbose] Readiness Indicator file check\\\\n2026-02-19T05:26:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.330094 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.330483 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.330539 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.330558 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.330585 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.330603 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.345582 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.362369 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.382581 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.406615 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.427625 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.433854 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.433906 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.433918 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.433934 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.433946 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.443180 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.464492 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 
05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.480434 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.502034 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.522592 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.537152 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.537210 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.537227 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.537258 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.537278 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.555924 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.576134 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.589873 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5c
c436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.606043 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.622672 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.639569 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.639632 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.639650 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.639682 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.639704 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.677773 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 12:53:00.305287547 +0000 UTC Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.702191 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.702273 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:14 crc kubenswrapper[5012]: E0219 05:26:14.702343 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.702458 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.702555 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:14 crc kubenswrapper[5012]: E0219 05:26:14.702595 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:14 crc kubenswrapper[5012]: E0219 05:26:14.702746 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:14 crc kubenswrapper[5012]: E0219 05:26:14.702899 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.723045 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.741211 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.741234 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.741242 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.741255 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.741264 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.741746 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.765575 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.782026 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade67
5e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.798479 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.818873 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
5:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.835809 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.844570 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.844621 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.844639 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.844668 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.844688 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.855062 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.870970 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.893169 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.908585 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.924769 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5c
c436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.938399 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.947346 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.947426 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.947442 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.947462 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.947496 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:14Z","lastTransitionTime":"2026-02-19T05:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.951221 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.965900 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:13Z\\\",\\\"message\\\":\\\"2026-02-19T05:25:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905\\\\n2026-02-19T05:25:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905 to /host/opt/cni/bin/\\\\n2026-02-19T05:25:28Z [verbose] multus-daemon started\\\\n2026-02-19T05:25:28Z [verbose] Readiness Indicator file check\\\\n2026-02-19T05:26:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.980806 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:14 crc kubenswrapper[5012]: I0219 05:26:14.995088 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:14Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.050489 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.050539 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.050551 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc 
kubenswrapper[5012]: I0219 05:26:15.050573 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.050600 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.153665 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.153714 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.153724 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.153741 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.153752 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.257021 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.257071 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.257084 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.257101 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.257112 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.295561 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/0.log" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.295636 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lkrsg" event={"ID":"e7a04e36-fbaa-4de1-871a-7225433eebb0","Type":"ContainerStarted","Data":"fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.308735 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.322773 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.341557 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:13Z\\\",\\\"message\\\":\\\"2026-02-19T05:25:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905\\\\n2026-02-19T05:25:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905 to /host/opt/cni/bin/\\\\n2026-02-19T05:25:28Z [verbose] multus-daemon started\\\\n2026-02-19T05:25:28Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T05:26:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.362570 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-s
yncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.362912 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.362952 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.362971 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.362998 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.363017 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.378848 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.398294 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.415233 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.436793 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.452274 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.465570 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.465620 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.465636 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.465660 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.465678 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.467397 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.484865 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.503109 5012 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.529715 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.560411 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.568253 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.568540 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.568618 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.568696 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.568763 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.583592 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.604728 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.618419 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:15Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:15 crc 
kubenswrapper[5012]: I0219 05:26:15.671136 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.671178 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.671187 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.671203 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.671214 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.678290 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 14:03:04.08401194 +0000 UTC Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.775106 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.775152 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.775160 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.775175 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.775185 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.878295 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.878366 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.878381 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.878401 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.878415 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.981497 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.981559 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.981577 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.981605 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:15 crc kubenswrapper[5012]: I0219 05:26:15.981625 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:15Z","lastTransitionTime":"2026-02-19T05:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.084823 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.084855 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.084866 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.084880 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.084890 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:16Z","lastTransitionTime":"2026-02-19T05:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.187579 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.187640 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.187661 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.187688 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.187708 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:16Z","lastTransitionTime":"2026-02-19T05:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.290662 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.290709 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.290728 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.290752 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.290770 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:16Z","lastTransitionTime":"2026-02-19T05:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.393482 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.393543 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.393565 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.393593 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.393614 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:16Z","lastTransitionTime":"2026-02-19T05:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.496578 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.496649 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.496667 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.496694 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.496713 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:16Z","lastTransitionTime":"2026-02-19T05:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.599536 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.599574 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.599585 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.599601 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.599612 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:16Z","lastTransitionTime":"2026-02-19T05:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.679380 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:26:38.645619917 +0000 UTC Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.702538 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.702572 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.702561 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.702577 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:16 crc kubenswrapper[5012]: E0219 05:26:16.702709 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.702859 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.702888 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.702903 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.702932 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:16 crc kubenswrapper[5012]: E0219 05:26:16.702926 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.702950 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:16Z","lastTransitionTime":"2026-02-19T05:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:16 crc kubenswrapper[5012]: E0219 05:26:16.703141 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:16 crc kubenswrapper[5012]: E0219 05:26:16.703029 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.805899 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.805928 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.805937 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.805951 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.805963 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:16Z","lastTransitionTime":"2026-02-19T05:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.908685 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.908750 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.908758 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.908777 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:16 crc kubenswrapper[5012]: I0219 05:26:16.908791 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:16Z","lastTransitionTime":"2026-02-19T05:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.012430 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.012497 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.012507 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.012525 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.012536 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.139743 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.139826 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.139842 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.139861 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.139875 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.245211 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.245247 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.245255 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.245271 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.245284 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.348392 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.348431 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.348442 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.348459 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.348471 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.451911 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.451946 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.451958 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.451974 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.451985 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.555447 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.555476 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.555484 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.555498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.555507 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.658335 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.658402 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.658423 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.658450 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.658471 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.680548 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 15:55:41.129530635 +0000 UTC Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.761549 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.761603 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.761622 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.761650 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.761667 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.864803 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.864861 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.864878 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.864906 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.864923 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.968051 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.968118 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.968142 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.968175 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:17 crc kubenswrapper[5012]: I0219 05:26:17.968211 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:17Z","lastTransitionTime":"2026-02-19T05:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.071166 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.071232 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.071248 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.071273 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.071294 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:18Z","lastTransitionTime":"2026-02-19T05:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.173513 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.173583 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.173601 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.173628 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.173645 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:18Z","lastTransitionTime":"2026-02-19T05:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.276042 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.276111 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.276134 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.276160 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.276177 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:18Z","lastTransitionTime":"2026-02-19T05:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.379825 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.379896 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.379913 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.379939 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.379956 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:18Z","lastTransitionTime":"2026-02-19T05:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.483065 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.483114 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.483133 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.483157 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.483176 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:18Z","lastTransitionTime":"2026-02-19T05:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.585186 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.585243 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.585265 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.585292 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.585360 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:18Z","lastTransitionTime":"2026-02-19T05:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.681726 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 00:29:25.761089959 +0000 UTC Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.687465 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.687518 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.687536 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.687586 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.687605 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:18Z","lastTransitionTime":"2026-02-19T05:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.701977 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.702027 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.702100 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:18 crc kubenswrapper[5012]: E0219 05:26:18.702349 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.702408 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:18 crc kubenswrapper[5012]: E0219 05:26:18.702537 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:18 crc kubenswrapper[5012]: E0219 05:26:18.702634 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:18 crc kubenswrapper[5012]: E0219 05:26:18.702733 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.703769 5012 scope.go:117] "RemoveContainer" containerID="13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.790410 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.790764 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.790782 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.790807 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.790826 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:18Z","lastTransitionTime":"2026-02-19T05:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.901252 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.901298 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.901338 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.901362 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:18 crc kubenswrapper[5012]: I0219 05:26:18.901380 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:18Z","lastTransitionTime":"2026-02-19T05:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.005144 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.005201 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.005218 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.005242 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.005259 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.108750 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.108809 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.108825 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.108849 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.108865 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.211862 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.211911 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.211927 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.211950 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.211966 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.312971 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/2.log" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.316459 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.316497 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.316508 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.316525 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.316536 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.318787 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.319212 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.343643 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10f
dee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.366875 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.400821 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 
05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.418978 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.422261 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.422289 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.422334 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.422349 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.422359 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.436518 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.450348 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.465354 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc 
kubenswrapper[5012]: I0219 05:26:19.483696 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.499350 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.515888 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:13Z\\\",\\\"message\\\":\\\"2026-02-19T05:25:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905\\\\n2026-02-19T05:25:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905 to /host/opt/cni/bin/\\\\n2026-02-19T05:25:28Z [verbose] multus-daemon started\\\\n2026-02-19T05:25:28Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T05:26:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.524616 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.524636 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.524643 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.524656 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.524664 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.539231 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.560076 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.573896 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade67
5e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.586636 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.606901 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
5:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.621915 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.627720 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.627945 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.628134 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.628286 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.628465 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.639349 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:19Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.681987 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 06:00:45.35633727 +0000 UTC Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.731284 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.731686 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.731900 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.732063 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.732208 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.834541 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.834917 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.835081 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.835229 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.835399 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.937659 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.937692 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.937700 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.937715 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:19 crc kubenswrapper[5012]: I0219 05:26:19.937724 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:19Z","lastTransitionTime":"2026-02-19T05:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.040526 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.040808 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.040972 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.041152 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.041340 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.144944 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.145229 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.145424 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.145556 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.145717 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.248984 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.249442 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.249572 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.249693 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.249841 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.325143 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/3.log" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.326861 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/2.log" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.331439 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" exitCode=1 Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.331655 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.331761 5012 scope.go:117] "RemoveContainer" containerID="13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.332889 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:26:20 crc kubenswrapper[5012]: E0219 05:26:20.333388 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.352398 5012 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.353172 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.353216 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.353232 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.353256 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.353273 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.374840 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.392834 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.409010 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc 
kubenswrapper[5012]: I0219 05:26:20.425213 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.443127 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.458180 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.458265 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.458289 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc 
kubenswrapper[5012]: I0219 05:26:20.458389 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.458416 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.465454 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:13Z\\\",\\\"message\\\":\\\"2026-02-19T05:25:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905\\\\n2026-02-19T05:25:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905 to /host/opt/cni/bin/\\\\n2026-02-19T05:25:28Z [verbose] multus-daemon started\\\\n2026-02-19T05:25:28Z [verbose] Readiness Indicator file check\\\\n2026-02-19T05:26:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.480379 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.500572 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a08
2b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.522208 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.541763 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.561123 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.561583 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.561662 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.561688 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.561720 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.561746 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.590269 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.607763 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.627351 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.648788 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.666355 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.666415 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.666432 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.666456 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.666473 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.682966 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:12:34.208625139 +0000 UTC Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.685670 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13374c2adef5bccd4b6092472f7a77d4a70f5871dc49cacdc29e111acde1078f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:25:53Z\\\",\\\"message\\\":\\\"dler 4\\\\nI0219 05:25:53.703172 6818 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 05:25:53.703192 6818 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 05:25:53.703216 6818 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:25:53.703233 6818 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI0219 05:25:53.703255 6818 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 05:25:53.703272 6818 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 05:25:53.703270 6818 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 05:25:53.703293 6818 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:25:53.703340 6818 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 05:25:53.703336 6818 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:25:53.703354 6818 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 05:25:53.703357 6818 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 05:25:53.703381 6818 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 05:25:53.703389 6818 factory.go:656] Stopping watch factory\\\\nI0219 05:25:53.703404 6818 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:25:53.703415 6818 ovnkube.go:599] Stopped ovnkube\\\\nI0219 05:25:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:20Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI0219 05:26:19.894801 7304 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:26:19.895084 7304 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:26:19.895385 7304 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 05:26:19.895484 7304 handler.go:190] 
Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:26:19.895522 7304 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:26:19.895545 7304 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:26:19.895554 7304 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:26:19.895558 7304 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:26:19.895561 7304 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:26:19.895582 7304 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:26:19.895582 7304 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 05:26:19.895591 7304 factory.go:656] Stopping watch factory\\\\nI0219 05:26:19.895608 7304 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/
var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:20Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.702126 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.702187 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.702160 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:20 crc kubenswrapper[5012]: E0219 05:26:20.702368 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.702453 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:20 crc kubenswrapper[5012]: E0219 05:26:20.702572 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:20 crc kubenswrapper[5012]: E0219 05:26:20.702681 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:20 crc kubenswrapper[5012]: E0219 05:26:20.702822 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.769246 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.769341 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.769359 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.769383 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.769403 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.872545 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.872613 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.872629 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.872652 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.872669 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.976258 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.976348 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.976369 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.976395 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:20 crc kubenswrapper[5012]: I0219 05:26:20.976413 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:20Z","lastTransitionTime":"2026-02-19T05:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.080428 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.080492 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.080510 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.080536 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.080556 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:21Z","lastTransitionTime":"2026-02-19T05:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.183138 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.183203 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.183220 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.183245 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.183264 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:21Z","lastTransitionTime":"2026-02-19T05:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.285232 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.285272 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.285284 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.285319 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.285333 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:21Z","lastTransitionTime":"2026-02-19T05:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.338274 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/3.log" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.343849 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:26:21 crc kubenswrapper[5012]: E0219 05:26:21.344105 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.362611 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.378098 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.387446 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.387484 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.387496 5012 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.387515 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.387528 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:21Z","lastTransitionTime":"2026-02-19T05:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.391269 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86
c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.410063 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc 
kubenswrapper[5012]: I0219 05:26:21.430565 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.449274 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791
745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.469275 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:13Z\\\",\\\"message\\\":\\\"2026-02-19T05:25:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905\\\\n2026-02-19T05:25:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905 to /host/opt/cni/bin/\\\\n2026-02-19T05:25:28Z [verbose] multus-daemon started\\\\n2026-02-19T05:25:28Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T05:26:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.487158 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade67
5e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.490233 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.490282 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.490328 5012 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.490355 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.490374 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:21Z","lastTransitionTime":"2026-02-19T05:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.502860 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"
image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.523255 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a08
2b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.541955 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.562025 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.579296 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.593677 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.593731 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.593750 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.593776 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.593793 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:21Z","lastTransitionTime":"2026-02-19T05:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.606139 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.626196 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.648264 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.680027 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:20Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI0219 05:26:19.894801 7304 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:26:19.895084 7304 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:26:19.895385 7304 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 05:26:19.895484 7304 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:26:19.895522 7304 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:26:19.895545 7304 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:26:19.895554 7304 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:26:19.895558 7304 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:26:19.895561 7304 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:26:19.895582 7304 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:26:19.895582 7304 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 05:26:19.895591 7304 factory.go:656] Stopping watch factory\\\\nI0219 05:26:19.895608 7304 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:26:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:21Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.683983 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:11:11.370053429 +0000 UTC Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.696220 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.696278 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.696295 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.696350 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:21 crc kubenswrapper[5012]: 
I0219 05:26:21.696369 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:21Z","lastTransitionTime":"2026-02-19T05:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.798437 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.798530 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.798554 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.798586 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.798605 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:21Z","lastTransitionTime":"2026-02-19T05:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.901970 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.902016 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.902028 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.902045 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:21 crc kubenswrapper[5012]: I0219 05:26:21.902057 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:21Z","lastTransitionTime":"2026-02-19T05:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.005713 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.005781 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.005802 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.005827 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.005845 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.108809 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.108852 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.108863 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.108881 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.108894 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.211212 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.211279 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.211297 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.211361 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.211384 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.314472 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.314548 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.314567 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.314593 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.314612 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.351132 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.351214 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.351239 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.351270 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.351294 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.373518 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:22Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.378570 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.378624 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.378642 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.378662 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.378679 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.399388 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:22Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.404087 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.404153 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.404171 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.404198 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.404215 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.424418 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:22Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.429498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.429541 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.429559 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.429582 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.429598 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.449677 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:22Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.454944 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.455023 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.455047 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.455080 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.455106 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.475058 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:22Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.475292 5012 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.477537 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.477670 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.477698 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.477730 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.477752 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.580681 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.580732 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.580751 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.580777 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.580826 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.683093 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.683162 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.683185 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.683216 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.683244 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.684066 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:20:32.178058094 +0000 UTC Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.702694 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.702863 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.702954 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.703038 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.703593 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.703686 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.703829 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:22 crc kubenswrapper[5012]: E0219 05:26:22.703968 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.786252 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.786319 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.786332 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.786351 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.786364 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.888653 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.888722 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.888745 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.888774 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.888798 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.991351 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.991400 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.991411 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.991431 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:22 crc kubenswrapper[5012]: I0219 05:26:22.991443 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:22Z","lastTransitionTime":"2026-02-19T05:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.093869 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.093932 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.093951 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.093976 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.093996 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:23Z","lastTransitionTime":"2026-02-19T05:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.196995 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.197033 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.197041 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.197056 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.197068 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:23Z","lastTransitionTime":"2026-02-19T05:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.302470 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.302506 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.302514 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.302528 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.302538 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:23Z","lastTransitionTime":"2026-02-19T05:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.405277 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.405729 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.405901 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.406071 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.406215 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:23Z","lastTransitionTime":"2026-02-19T05:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.508445 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.508509 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.508536 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.508559 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.508576 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:23Z","lastTransitionTime":"2026-02-19T05:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.611971 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.612026 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.612044 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.612069 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.612086 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:23Z","lastTransitionTime":"2026-02-19T05:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.684218 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:17:36.128638914 +0000 UTC Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.714963 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.715008 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.715021 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.715049 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.715062 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:23Z","lastTransitionTime":"2026-02-19T05:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.818217 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.818247 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.818256 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.818268 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.818279 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:23Z","lastTransitionTime":"2026-02-19T05:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.920732 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.920806 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.920830 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.920856 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:23 crc kubenswrapper[5012]: I0219 05:26:23.920873 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:23Z","lastTransitionTime":"2026-02-19T05:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.023398 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.023456 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.023474 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.023498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.023516 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.127020 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.127358 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.127545 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.127725 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.127895 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.249856 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.249915 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.249932 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.249959 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.249977 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.352995 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.353050 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.353067 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.353089 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.353106 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.456020 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.456072 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.456089 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.456113 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.456130 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.559243 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.559341 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.559360 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.559381 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.559397 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.661821 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.662198 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.662377 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.662544 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.662680 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.685248 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:39:06.905668395 +0000 UTC Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.702606 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.702683 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:24 crc kubenswrapper[5012]: E0219 05:26:24.702984 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.703071 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.703076 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:24 crc kubenswrapper[5012]: E0219 05:26:24.703155 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:24 crc kubenswrapper[5012]: E0219 05:26:24.703257 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:24 crc kubenswrapper[5012]: E0219 05:26:24.703386 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.720786 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e2860323f3a6efd55cc004f26f34aa817f5db6b62294f6b2003df9603df9691\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.744807 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f3af476-577a-46f9-a71c-60fab8fdaa68\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b0a5d75c0c52299115ad9c3e55b1aac10a6f6f1da17b63d43ac32c4dcfe82bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://904b7937e7011c097a2e231d4f26bfdc682943322efd2e452b731a20172d54c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5117c707083c1cb3acacdfede5bb718a34155a7a48d143be73e054f8f1d069f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fccdd9beeac81b188e9f8e434d87c5b02e486261beb67984c0e160d22c0c525e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://907d1
50597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://907d150597985503e177c90733caffd9979a83fc613eaa813cc73f6c9ac88827\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b04d07d54772f43e201b923cb9635c40920fce50c0d88907ca0c3beb86308f68\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b9d6be254bce8165b930815de949b06ab21597842f1358db7263d24c778306a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94dgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wv2tq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.762196 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa645bc5-8cc3-45bc-be2e-7cf7d53abba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8de32c21b4b62fe1413084dd27d5e04d2ec5807a650e01d4c2efabf42e166187\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0ebb0e9d1778b3c057dedd85b449afade675e29e9e93e9fad747da229ebb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wvjcr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gncl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.766790 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.766928 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.766956 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.766986 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.767220 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.779276 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sh856" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e445e06-98fd-4fc2-b480-58ddf368aeb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dd59cbd4799436c61f7177d6bb0464b62e5d4ef46a1e5e330364c906fca7ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gf7wt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sh856\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.802926 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 05:25:18.444395 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 05:25:18.451355 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3573987533/tls.crt::/tmp/serving-cert-3573987533/tls.key\\\\\\\"\\\\nI0219 05:25:24.040260 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 05:25:24.043116 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 05:25:24.043134 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 05:25:24.043156 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 
05:25:24.043161 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 05:25:24.051072 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 05:25:24.051101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051108 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 05:25:24.051114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 05:25:24.051118 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 05:25:24.051122 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 05:25:24.051126 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 05:25:24.051293 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 05:25:24.052938 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.820803 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.835358 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1472e27dc29f6e2ead18ecedf494f0e4e398d34d875e25f580b29ff4c7dd3350\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c24f60eec34996b40d58f552fa8161520930e11ad64ea27855e67ae60986dec5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.850589 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eaee761-7499-44af-8503-70af0c3216f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17031faff021ed61c4032a6416a718671b93f60566c30248bb7ae3b79a0add07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d9279dbeabfa7f37912552b5d30c4dd1e330c19c3d0e657ab14a54c6b306d9d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e8c2452d09ecaed421f11960c0696e4f341af459d140473451c81fd35791a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.868064 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d164ea855691084e280bbfba7e38b9e0676beb29f86749f9a58074b180e51a8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.870655 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.870695 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.870711 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.870732 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.870747 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.898421 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:20Z\\\",\\\"message\\\":\\\"ient/informers/externalversions/factory.go:141\\\\nI0219 05:26:19.894801 7304 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 05:26:19.895084 7304 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 05:26:19.895385 7304 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 05:26:19.895484 7304 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 05:26:19.895522 7304 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 05:26:19.895545 7304 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 05:26:19.895554 7304 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 05:26:19.895558 7304 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 05:26:19.895561 7304 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 05:26:19.895582 7304 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 05:26:19.895582 7304 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 05:26:19.895591 7304 factory.go:656] Stopping watch factory\\\\nI0219 05:26:19.895608 7304 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:26:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52eb4bc38904f4766
a308824c9181bcf326002d09cdb694cd084465cd98d63c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sj2rz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8ff9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.915993 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca9e62de-d3da-441f-872c-041155358f5c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e98f19db4c5c9195d053af37d083f2878b34c43f6dde196474accc5b50e889f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9240f7aeba9def91f54641b0bf6f8d9e6a8e5eb8f7e46b910372c425616e5f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74056ba46dcbd9e83d8283e28be385218df1a2a25007e74cc991865249c81eb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://2d70ba5cc436129e7388ec3984c811f1c62343fd228657727ffcd694e76452c4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T05:25:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:04Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.934378 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.947765 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4cs9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93b25601-4740-4c9d-9e62-0e7566484633\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fc8a503d4d65dcb676b88e55eec41297071e86c89e201fb7f30154c93834fbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r2sbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4cs9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.961989 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e231950-a365-4a82-9481-05fdac171449\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7whbb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-q5cb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc 
kubenswrapper[5012]: I0219 05:26:24.975759 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.976649 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.976786 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.976911 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.977070 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.977273 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:24Z","lastTransitionTime":"2026-02-19T05:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:24 crc kubenswrapper[5012]: I0219 05:26:24.991382 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72c12f8-ba8a-4e43-aba7-f3c31a59181a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df0de6f5c61ad49bd9c61995bb207b32d4c082b3a107ac72655c4ccfe4fdb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:25:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c6kt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5lt44\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:24Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.006081 5012 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lkrsg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7a04e36-fbaa-4de1-871a-7225433eebb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:25:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T05:26:13Z\\\",\\\"message\\\":\\\"2026-02-19T05:25:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905\\\\n2026-02-19T05:25:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cb2e74ce-78ec-4d27-a01b-23fb081c2905 to /host/opt/cni/bin/\\\\n2026-02-19T05:25:28Z [verbose] multus-daemon started\\\\n2026-02-19T05:25:28Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T05:26:13Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T05:25:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T05:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nwlt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T05:25:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lkrsg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:25Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.080952 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.081004 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.081023 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.081047 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.081064 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:25Z","lastTransitionTime":"2026-02-19T05:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.188693 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.188771 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.188786 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.188811 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.188832 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:25Z","lastTransitionTime":"2026-02-19T05:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.291992 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.292030 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.292041 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.292059 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.292072 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:25Z","lastTransitionTime":"2026-02-19T05:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.395928 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.395989 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.395999 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.396016 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.396047 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:25Z","lastTransitionTime":"2026-02-19T05:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.500391 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.501018 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.501172 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.501284 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.506798 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:25Z","lastTransitionTime":"2026-02-19T05:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.610245 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.610358 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.610380 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.610419 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.610441 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:25Z","lastTransitionTime":"2026-02-19T05:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.686355 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 20:15:18.615173137 +0000 UTC Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.713514 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.713621 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.713682 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.713712 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.713771 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:25Z","lastTransitionTime":"2026-02-19T05:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.816708 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.816786 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.816803 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.816832 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.816884 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:25Z","lastTransitionTime":"2026-02-19T05:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.920886 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.920958 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.921008 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.921041 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:25 crc kubenswrapper[5012]: I0219 05:26:25.921064 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:25Z","lastTransitionTime":"2026-02-19T05:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.024462 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.024542 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.024562 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.024592 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.024614 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.127540 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.127626 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.127645 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.127671 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.127691 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.232541 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.232599 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.232613 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.232634 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.232650 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.335802 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.336198 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.336209 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.336225 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.336236 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.439372 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.439434 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.439452 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.439480 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.439501 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.542613 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.542693 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.542719 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.542753 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.542775 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.645589 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.645634 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.645646 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.645664 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.645677 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.687131 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 15:16:59.248236989 +0000 UTC
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.702528 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.702591 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2"
Feb 19 05:26:26 crc kubenswrapper[5012]: E0219 05:26:26.702710 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.702741 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.702801 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 05:26:26 crc kubenswrapper[5012]: E0219 05:26:26.703523 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 05:26:26 crc kubenswrapper[5012]: E0219 05:26:26.703795 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449"
Feb 19 05:26:26 crc kubenswrapper[5012]: E0219 05:26:26.702968 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.748055 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.748110 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.748126 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.748147 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.748165 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.851164 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.851220 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.851240 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.851266 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.851283 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.954362 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.954433 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.954451 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.954474 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:26 crc kubenswrapper[5012]: I0219 05:26:26.954493 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:26Z","lastTransitionTime":"2026-02-19T05:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.056763 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.056822 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.056843 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.056870 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.056892 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.160738 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.160794 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.160810 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.160834 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.160851 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.264117 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.264247 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.264266 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.264292 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.264343 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.371295 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.371425 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.371452 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.371488 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.371510 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.475036 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.475111 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.475128 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.475156 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.475173 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.577994 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.578047 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.578065 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.578089 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.578108 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.680996 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.681049 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.681068 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.681092 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.681112 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.687677 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 00:19:14.299697104 +0000 UTC
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.784699 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.784780 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.784807 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.784840 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.784863 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.888529 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.888581 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.888600 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.888623 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.888640 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.991963 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.992012 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.992028 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.992051 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:27 crc kubenswrapper[5012]: I0219 05:26:27.992069 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:27Z","lastTransitionTime":"2026-02-19T05:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.094815 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.094946 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.095009 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.095032 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.095049 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:28Z","lastTransitionTime":"2026-02-19T05:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.198363 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.198408 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.198425 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.198446 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.198462 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:28Z","lastTransitionTime":"2026-02-19T05:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.301639 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.301717 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.301771 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.301805 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.301828 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:28Z","lastTransitionTime":"2026-02-19T05:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.404411 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.404469 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.404485 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.404508 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.404526 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:28Z","lastTransitionTime":"2026-02-19T05:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.506940 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.507015 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.507042 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.507073 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.507095 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:28Z","lastTransitionTime":"2026-02-19T05:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.587268 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.587454 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.587424553 +0000 UTC m=+148.620747152 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.587501 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.587778 5012 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.587836 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.587821863 +0000 UTC m=+148.621144462 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.610413 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.610484 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.610508 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.610539 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.610564 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:28Z","lastTransitionTime":"2026-02-19T05:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.688615 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:03:57.28881732 +0000 UTC
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.689198 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.689357 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689488 5012 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.689523 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689584 5012 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689622 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689642 5012 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689604 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.689576673 +0000 UTC m=+148.722899272 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689720 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.689700886 +0000 UTC m=+148.723023485 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689764 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689802 5012 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689826 5012 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.689934 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.689904871 +0000 UTC m=+148.723227500 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.702076 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.702145 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.702228 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.702261 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.702282 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.702503 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.702686 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:28 crc kubenswrapper[5012]: E0219 05:26:28.702783 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.713076 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.713126 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.713143 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.713167 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.713184 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:28Z","lastTransitionTime":"2026-02-19T05:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.816682 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.816744 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.816762 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.816785 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.816803 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:28Z","lastTransitionTime":"2026-02-19T05:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.919976 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.920177 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.920197 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.920224 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:28 crc kubenswrapper[5012]: I0219 05:26:28.920344 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:28Z","lastTransitionTime":"2026-02-19T05:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.023992 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.024056 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.024075 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.024100 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.024118 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.127842 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.127898 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.127915 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.127945 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.127964 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.231017 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.231111 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.231134 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.231163 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.231184 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.335284 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.335376 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.335399 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.335425 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.335443 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.438101 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.438148 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.438165 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.438185 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.438205 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.541016 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.541075 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.541092 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.541117 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.541137 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.644815 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.645452 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.645487 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.645510 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.645527 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.689567 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:42:09.263022418 +0000 UTC Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.748242 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.748352 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.748373 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.748398 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.748415 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.850772 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.850826 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.850846 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.850870 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.850886 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.953583 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.953638 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.953655 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.953679 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:29 crc kubenswrapper[5012]: I0219 05:26:29.953718 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:29Z","lastTransitionTime":"2026-02-19T05:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.056649 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.056699 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.056719 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.056754 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.056788 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.164439 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.164565 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.164666 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.164703 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.164726 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.268116 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.268176 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.268192 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.268218 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.268235 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.370958 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.371011 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.371027 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.371049 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.371065 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.474267 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.474382 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.474399 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.474425 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.474443 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.576510 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.576543 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.576551 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.576565 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.576576 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.679472 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.679498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.679506 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.679521 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.679531 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.690678 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:43:41.160750748 +0000 UTC Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.705670 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.705704 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:30 crc kubenswrapper[5012]: E0219 05:26:30.705846 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.705916 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.706030 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:30 crc kubenswrapper[5012]: E0219 05:26:30.706094 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:30 crc kubenswrapper[5012]: E0219 05:26:30.706281 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:30 crc kubenswrapper[5012]: E0219 05:26:30.706413 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.782609 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.782700 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.782723 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.782753 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.782776 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.886630 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.886687 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.886704 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.886727 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.886771 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.989780 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.989874 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.989897 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.989927 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:30 crc kubenswrapper[5012]: I0219 05:26:30.989951 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:30Z","lastTransitionTime":"2026-02-19T05:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.093498 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.093563 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.093581 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.093609 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.093628 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:31Z","lastTransitionTime":"2026-02-19T05:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.197662 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.197919 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.197939 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.197972 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.197993 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:31Z","lastTransitionTime":"2026-02-19T05:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.301532 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.301612 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.301640 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.301674 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.301697 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:31Z","lastTransitionTime":"2026-02-19T05:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.407150 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.407240 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.407264 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.407295 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.407397 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:31Z","lastTransitionTime":"2026-02-19T05:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.510518 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.510581 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.510595 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.510615 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.510632 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:31Z","lastTransitionTime":"2026-02-19T05:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.613631 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.613673 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.613683 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.613699 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.613708 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:31Z","lastTransitionTime":"2026-02-19T05:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.691743 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 15:05:34.989596351 +0000 UTC Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.716673 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.716805 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.716823 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.716851 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.716869 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:31Z","lastTransitionTime":"2026-02-19T05:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.819955 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.820013 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.820031 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.820056 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.820075 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:31Z","lastTransitionTime":"2026-02-19T05:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.922452 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.922762 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.922947 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.923100 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:31 crc kubenswrapper[5012]: I0219 05:26:31.923241 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:31Z","lastTransitionTime":"2026-02-19T05:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.026365 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.026811 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.026945 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.027102 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.027271 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.130781 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.130841 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.130862 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.130890 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.130907 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.233694 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.233739 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.233758 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.233779 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.233797 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.336911 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.336973 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.336992 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.337019 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.337041 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.441265 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.441415 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.441434 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.441461 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.441480 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.544212 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.544620 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.544778 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.545004 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.545190 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.648446 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.648520 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.648543 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.648577 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.648600 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.674032 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.674086 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.674108 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.674136 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.674161 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.692485 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 17:42:12.703651755 +0000 UTC Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.696068 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",
\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.701274 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.701540 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.701678 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.701803 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.701841 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.702465 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.702666 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.701841 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.702862 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.701839 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.703057 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.703383 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.703675 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.727258 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.732978 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.733028 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.733044 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.733068 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.733142 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.752362 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.757771 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.757818 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.757835 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.757857 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.757872 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.777812 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.782779 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.782987 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.783154 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.783345 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.783495 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.802909 5012 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T05:26:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"30666371-bab3-4856-be6a-83da7a1b9e4e\\\",\\\"systemUUID\\\":\\\"61bedd06-2cec-4dca-b6dd-2763eca77472\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T05:26:32Z is after 2025-08-24T17:21:41Z" Feb 19 05:26:32 crc kubenswrapper[5012]: E0219 05:26:32.803179 5012 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.805643 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.805704 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.805723 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.805748 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.805770 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.908328 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.908722 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.908874 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.909056 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:32 crc kubenswrapper[5012]: I0219 05:26:32.909217 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:32Z","lastTransitionTime":"2026-02-19T05:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.011720 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.011780 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.011798 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.011824 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.011844 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.115763 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.116022 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.116196 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.116377 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.116586 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.220331 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.220405 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.220428 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.220464 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.220490 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.323962 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.324105 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.324127 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.324152 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.324171 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.428252 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.428347 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.428370 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.428434 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.428454 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.530965 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.531031 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.531050 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.531076 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.531094 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.634421 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.634468 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.634483 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.634509 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.634526 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.692876 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 09:22:17.222442892 +0000 UTC Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.736887 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.736941 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.736957 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.736979 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.737031 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.840480 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.840526 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.840542 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.840566 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.840585 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.944135 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.944206 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.944222 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.944690 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:33 crc kubenswrapper[5012]: I0219 05:26:33.944754 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:33Z","lastTransitionTime":"2026-02-19T05:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.048191 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.048245 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.048262 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.048289 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.048332 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.151426 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.151524 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.151548 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.151582 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.151611 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.255030 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.255089 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.255106 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.255131 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.255148 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.358855 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.358922 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.358943 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.358969 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.358989 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.462651 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.462743 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.462770 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.462804 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.462838 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.564792 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.564827 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.564834 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.564848 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.564857 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.666599 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.666633 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.666641 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.666654 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.666662 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.694268 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 21:54:32.972689843 +0000 UTC Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.702588 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.702729 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:34 crc kubenswrapper[5012]: E0219 05:26:34.702842 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.702928 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.702962 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:34 crc kubenswrapper[5012]: E0219 05:26:34.703085 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:34 crc kubenswrapper[5012]: E0219 05:26:34.703203 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:34 crc kubenswrapper[5012]: E0219 05:26:34.703318 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.754423 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=70.754404666 podStartE2EDuration="1m10.754404666s" podCreationTimestamp="2026-02-19 05:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:34.737787381 +0000 UTC m=+90.771109950" watchObservedRunningTime="2026-02-19 05:26:34.754404666 +0000 UTC m=+90.787727235" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.770295 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.770341 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.770350 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.770364 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.770376 5012 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.819187 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-wv2tq" podStartSLOduration=68.819170211 podStartE2EDuration="1m8.819170211s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:34.818501114 +0000 UTC m=+90.851823683" watchObservedRunningTime="2026-02-19 05:26:34.819170211 +0000 UTC m=+90.852492780" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.848571 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gncl6" podStartSLOduration=68.848544772 podStartE2EDuration="1m8.848544772s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:34.834640026 +0000 UTC m=+90.867962635" watchObservedRunningTime="2026-02-19 05:26:34.848544772 +0000 UTC m=+90.881867381" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.849079 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-sh856" podStartSLOduration=68.849072205 podStartE2EDuration="1m8.849072205s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 05:26:34.8484695 +0000 UTC m=+90.881792119" watchObservedRunningTime="2026-02-19 05:26:34.849072205 +0000 UTC m=+90.882394804" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.868437 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=70.868389988 podStartE2EDuration="1m10.868389988s" podCreationTimestamp="2026-02-19 05:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:34.867707991 +0000 UTC m=+90.901030600" watchObservedRunningTime="2026-02-19 05:26:34.868389988 +0000 UTC m=+90.901712597" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.874023 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.874086 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.874106 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.874135 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.874154 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.937986 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=40.937874884 podStartE2EDuration="40.937874884s" podCreationTimestamp="2026-02-19 05:25:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:34.936297954 +0000 UTC m=+90.969620563" watchObservedRunningTime="2026-02-19 05:26:34.937874884 +0000 UTC m=+90.971197493" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.980856 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.980908 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.980923 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.980940 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.980958 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:34Z","lastTransitionTime":"2026-02-19T05:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:34 crc kubenswrapper[5012]: I0219 05:26:34.991196 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-4cs9h" podStartSLOduration=69.991137855 podStartE2EDuration="1m9.991137855s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:34.972007966 +0000 UTC m=+91.005330615" watchObservedRunningTime="2026-02-19 05:26:34.991137855 +0000 UTC m=+91.024460464" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.042197 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podStartSLOduration=70.042170539 podStartE2EDuration="1m10.042170539s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:35.026401036 +0000 UTC m=+91.059723635" watchObservedRunningTime="2026-02-19 05:26:35.042170539 +0000 UTC m=+91.075493148" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.042636 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lkrsg" podStartSLOduration=70.042628371 podStartE2EDuration="1m10.042628371s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:35.041767169 +0000 UTC m=+91.075089778" watchObservedRunningTime="2026-02-19 05:26:35.042628371 +0000 UTC m=+91.075950980" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.083789 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.083850 5012 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.083867 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.083890 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.083911 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:35Z","lastTransitionTime":"2026-02-19T05:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.186390 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.186448 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.186465 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.186490 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.186508 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:35Z","lastTransitionTime":"2026-02-19T05:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.289611 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.289667 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.289684 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.289705 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.289722 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:35Z","lastTransitionTime":"2026-02-19T05:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.392712 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.392754 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.392770 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.392792 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.392809 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:35Z","lastTransitionTime":"2026-02-19T05:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.495784 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.495847 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.495866 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.495895 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.495919 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:35Z","lastTransitionTime":"2026-02-19T05:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.597662 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.597715 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.597725 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.597744 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.597758 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:35Z","lastTransitionTime":"2026-02-19T05:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.694473 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 14:23:44.142385043 +0000 UTC Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.700417 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.700453 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.700462 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.700480 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.700491 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:35Z","lastTransitionTime":"2026-02-19T05:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.703900 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:26:35 crc kubenswrapper[5012]: E0219 05:26:35.704176 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.805992 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.807883 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.808083 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.808222 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.808376 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:35Z","lastTransitionTime":"2026-02-19T05:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.911137 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.911207 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.911224 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.911252 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:35 crc kubenswrapper[5012]: I0219 05:26:35.911270 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:35Z","lastTransitionTime":"2026-02-19T05:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.014030 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.014140 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.014163 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.014195 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.014218 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.117098 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.117149 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.117161 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.117179 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.117192 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.220655 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.220716 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.220732 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.220756 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.220780 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.324244 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.324356 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.324375 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.324400 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.324420 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.431871 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.432504 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.432542 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.432573 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.432598 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.535688 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.535742 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.535759 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.535783 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.535839 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.638244 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.638292 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.638343 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.638364 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.638381 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.695395 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:53:38.303301953 +0000 UTC Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.703634 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:36 crc kubenswrapper[5012]: E0219 05:26:36.703777 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.703787 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.703839 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.703851 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:36 crc kubenswrapper[5012]: E0219 05:26:36.703953 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:36 crc kubenswrapper[5012]: E0219 05:26:36.703968 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:36 crc kubenswrapper[5012]: E0219 05:26:36.704107 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.717188 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.741180 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.741235 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.741250 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.741266 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.741275 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.843836 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.843878 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.843888 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.843929 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.843941 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.947615 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.947677 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.947694 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.947718 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:36 crc kubenswrapper[5012]: I0219 05:26:36.947735 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:36Z","lastTransitionTime":"2026-02-19T05:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.051009 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.051062 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.051080 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.051104 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.051122 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.154288 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.154368 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.154384 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.154408 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.154461 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.257010 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.257058 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.257076 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.257099 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.257116 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.360011 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.360078 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.360102 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.360134 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.360177 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.463252 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.463360 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.463385 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.463416 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.463437 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.566578 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.566650 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.566671 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.566699 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.566720 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.669664 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.669701 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.669711 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.669728 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.669740 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.695832 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 07:40:34.606787878 +0000 UTC Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.772503 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.772585 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.772613 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.772642 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.772665 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.875652 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.875699 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.875711 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.875728 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.875740 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.978765 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.978806 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.978815 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.978830 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:37 crc kubenswrapper[5012]: I0219 05:26:37.978838 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:37Z","lastTransitionTime":"2026-02-19T05:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.081796 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.081850 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.081865 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.081887 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.081905 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:38Z","lastTransitionTime":"2026-02-19T05:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.184825 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.184890 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.184907 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.184932 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.184952 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:38Z","lastTransitionTime":"2026-02-19T05:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.287493 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.287554 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.287571 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.287595 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.287612 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:38Z","lastTransitionTime":"2026-02-19T05:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.390833 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.390895 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.390910 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.390936 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.390958 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:38Z","lastTransitionTime":"2026-02-19T05:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.493734 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.493798 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.493816 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.493843 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.493861 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:38Z","lastTransitionTime":"2026-02-19T05:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.596127 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.596179 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.596191 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.596214 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.596226 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:38Z","lastTransitionTime":"2026-02-19T05:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.695956 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 05:50:02.119644347 +0000 UTC Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.699150 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.699226 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.699252 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.699282 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.699354 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:38Z","lastTransitionTime":"2026-02-19T05:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.702725 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.702745 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.702768 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:38 crc kubenswrapper[5012]: E0219 05:26:38.702870 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:38 crc kubenswrapper[5012]: E0219 05:26:38.702983 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.702999 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:38 crc kubenswrapper[5012]: E0219 05:26:38.703056 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:38 crc kubenswrapper[5012]: E0219 05:26:38.703163 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.801763 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.801809 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.801821 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.801839 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.801851 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:38Z","lastTransitionTime":"2026-02-19T05:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.905192 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.905253 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.905263 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.905276 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:38 crc kubenswrapper[5012]: I0219 05:26:38.905287 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:38Z","lastTransitionTime":"2026-02-19T05:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.008768 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.008816 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.008827 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.008844 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.008856 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.111647 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.111681 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.111689 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.111704 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.111713 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.214085 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.214133 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.214145 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.214163 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.214176 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.316964 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.316998 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.317006 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.317018 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.317027 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.418282 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.418387 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.418416 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.418439 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.418456 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.521585 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.521654 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.521680 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.521707 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.521725 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.625069 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.625119 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.625130 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.625148 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.625158 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.697071 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:04:07.201982104 +0000 UTC Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.727749 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.727822 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.727841 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.727863 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.727879 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.830475 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.830535 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.830559 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.830587 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.830611 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.932836 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.932919 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.932935 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.932957 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:39 crc kubenswrapper[5012]: I0219 05:26:39.932974 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:39Z","lastTransitionTime":"2026-02-19T05:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.035461 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.035508 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.035524 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.035545 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.035562 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.138444 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.138484 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.138502 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.138523 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.138540 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.241295 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.241368 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.241385 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.241406 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.241423 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.344250 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.344333 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.344357 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.344387 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.344404 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.446945 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.446997 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.447009 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.447026 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.447037 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.551095 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.551150 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.551167 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.551195 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.551215 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.653536 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.653597 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.653615 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.653639 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.653657 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.697515 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:11:24.266614667 +0000 UTC Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.701912 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.702021 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:40 crc kubenswrapper[5012]: E0219 05:26:40.702143 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.702179 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.702197 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:40 crc kubenswrapper[5012]: E0219 05:26:40.702410 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:40 crc kubenswrapper[5012]: E0219 05:26:40.702449 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:40 crc kubenswrapper[5012]: E0219 05:26:40.702571 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.757159 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.757223 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.757240 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.757265 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.757282 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.860943 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.860995 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.861006 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.861040 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.861052 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.963694 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.963745 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.963758 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.963775 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:40 crc kubenswrapper[5012]: I0219 05:26:40.963787 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:40Z","lastTransitionTime":"2026-02-19T05:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.066458 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.066505 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.066516 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.066531 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.066541 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.170212 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.170272 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.170289 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.170344 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.170364 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.273448 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.273514 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.273528 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.273549 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.273565 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.376652 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.376720 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.376742 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.376771 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.376793 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.479386 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.479433 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.479443 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.479461 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.479473 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.582253 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.582355 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.582381 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.582405 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.582421 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.686413 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.686489 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.686509 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.686534 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.686553 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.698630 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:15:01.426252536 +0000 UTC Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.789867 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.789978 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.790003 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.790034 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.790057 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.892737 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.892786 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.892798 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.892816 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.892828 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.995296 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.995383 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.995401 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.995425 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:41 crc kubenswrapper[5012]: I0219 05:26:41.995442 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:41Z","lastTransitionTime":"2026-02-19T05:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.098842 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.098899 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.098917 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.098943 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.098962 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:42Z","lastTransitionTime":"2026-02-19T05:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.201604 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.201675 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.201700 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.201731 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.201758 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:42Z","lastTransitionTime":"2026-02-19T05:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.305626 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.305699 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.305717 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.305744 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.305763 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:42Z","lastTransitionTime":"2026-02-19T05:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.412919 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.412994 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.413051 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.413072 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.413087 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:42Z","lastTransitionTime":"2026-02-19T05:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.516023 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.516076 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.516094 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.516115 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.516131 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:42Z","lastTransitionTime":"2026-02-19T05:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.618561 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.618624 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.618641 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.618664 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.618712 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:42Z","lastTransitionTime":"2026-02-19T05:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.698888 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 18:30:17.631826358 +0000 UTC
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.702371 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.702451 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 05:26:42 crc kubenswrapper[5012]: E0219 05:26:42.702521 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 05:26:42 crc kubenswrapper[5012]: E0219 05:26:42.702612 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.702454 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.702692 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 05:26:42 crc kubenswrapper[5012]: E0219 05:26:42.702837 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449"
Feb 19 05:26:42 crc kubenswrapper[5012]: E0219 05:26:42.702929 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.721183 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.721453 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.721590 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.721730 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.721863 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:42Z","lastTransitionTime":"2026-02-19T05:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.823962 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.824008 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.824025 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.824045 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.824061 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:42Z","lastTransitionTime":"2026-02-19T05:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.926256 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.926514 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.926656 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.926817 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:42 crc kubenswrapper[5012]: I0219 05:26:42.926940 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:42Z","lastTransitionTime":"2026-02-19T05:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.029795 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.029852 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.029870 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.029894 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.029912 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:43Z","lastTransitionTime":"2026-02-19T05:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.037279 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.037373 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.037398 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.037430 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.037457 5012 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T05:26:43Z","lastTransitionTime":"2026-02-19T05:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.102377 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"]
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.102943 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.105649 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.106078 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.106425 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.106565 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.142411 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=7.1423765790000004 podStartE2EDuration="7.142376579s" podCreationTimestamp="2026-02-19 05:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:43.12242083 +0000 UTC m=+99.155743429" watchObservedRunningTime="2026-02-19 05:26:43.142376579 +0000 UTC m=+99.175699188"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.259667 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.260163 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.260433 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.260645 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.260964 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.361530 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.361796 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.362088 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.362385 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.362653 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.361662 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.362236 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.363873 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.372184 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.392127 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9d33f25-d1bf-4118-b7f1-998bcd6eb548-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6k28t\" (UID: \"f9d33f25-d1bf-4118-b7f1-998bcd6eb548\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.426546 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t"
Feb 19 05:26:43 crc kubenswrapper[5012]: W0219 05:26:43.448319 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9d33f25_d1bf_4118_b7f1_998bcd6eb548.slice/crio-a226133877922b03dd5678c89d1c8cc750b4a83353ad7c5a8a4e9429d0367f51 WatchSource:0}: Error finding container a226133877922b03dd5678c89d1c8cc750b4a83353ad7c5a8a4e9429d0367f51: Status 404 returned error can't find the container with id a226133877922b03dd5678c89d1c8cc750b4a83353ad7c5a8a4e9429d0367f51
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.699761 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:36:13.329270916 +0000 UTC
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.700542 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 19 05:26:43 crc kubenswrapper[5012]: I0219 05:26:43.711338 5012 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 19 05:26:44 crc kubenswrapper[5012]: I0219 05:26:44.430659 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" event={"ID":"f9d33f25-d1bf-4118-b7f1-998bcd6eb548","Type":"ContainerStarted","Data":"d53c173b9107f3ca791defd755a07197cda9ce15693ddeb15e24fad36dee93c3"}
Feb 19 05:26:44 crc kubenswrapper[5012]: I0219 05:26:44.430715 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" event={"ID":"f9d33f25-d1bf-4118-b7f1-998bcd6eb548","Type":"ContainerStarted","Data":"a226133877922b03dd5678c89d1c8cc750b4a83353ad7c5a8a4e9429d0367f51"}
Feb 19 05:26:44 crc kubenswrapper[5012]: I0219 05:26:44.474288 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2"
Feb 19 05:26:44 crc kubenswrapper[5012]: E0219 05:26:44.474487 5012 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 05:26:44 crc kubenswrapper[5012]: E0219 05:26:44.474582 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs podName:2e231950-a365-4a82-9481-05fdac171449 nodeName:}" failed. No retries permitted until 2026-02-19 05:27:48.474558018 +0000 UTC m=+164.507880627 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs") pod "network-metrics-daemon-q5cb2" (UID: "2e231950-a365-4a82-9481-05fdac171449") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 05:26:44 crc kubenswrapper[5012]: I0219 05:26:44.702592 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 05:26:44 crc kubenswrapper[5012]: I0219 05:26:44.702653 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 05:26:44 crc kubenswrapper[5012]: I0219 05:26:44.702741 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 05:26:44 crc kubenswrapper[5012]: E0219 05:26:44.702918 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 05:26:44 crc kubenswrapper[5012]: I0219 05:26:44.703516 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2"
Feb 19 05:26:44 crc kubenswrapper[5012]: E0219 05:26:44.704954 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 05:26:44 crc kubenswrapper[5012]: E0219 05:26:44.705123 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 05:26:44 crc kubenswrapper[5012]: E0219 05:26:44.705125 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449"
Feb 19 05:26:45 crc kubenswrapper[5012]: I0219 05:26:45.722599 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6k28t" podStartSLOduration=80.722566646 podStartE2EDuration="1m20.722566646s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:44.454875715 +0000 UTC m=+100.488198324" watchObservedRunningTime="2026-02-19 05:26:45.722566646 +0000 UTC m=+101.755889255"
Feb 19 05:26:45 crc kubenswrapper[5012]: I0219 05:26:45.724111 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Feb 19 05:26:46 crc kubenswrapper[5012]: I0219 05:26:46.702230 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2"
Feb 19 05:26:46 crc kubenswrapper[5012]: I0219 05:26:46.702418 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 05:26:46 crc kubenswrapper[5012]: I0219 05:26:46.702566 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 05:26:46 crc kubenswrapper[5012]: I0219 05:26:46.702621 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 05:26:46 crc kubenswrapper[5012]: E0219 05:26:46.702652 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 05:26:46 crc kubenswrapper[5012]: E0219 05:26:46.702874 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 05:26:46 crc kubenswrapper[5012]: E0219 05:26:46.703006 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 05:26:46 crc kubenswrapper[5012]: E0219 05:26:46.703451 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449"
Feb 19 05:26:47 crc kubenswrapper[5012]: I0219 05:26:47.704123 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"
Feb 19 05:26:47 crc kubenswrapper[5012]: E0219 05:26:47.704719 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"
Feb 19 05:26:48 crc kubenswrapper[5012]: I0219 05:26:48.702803 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 05:26:48 crc kubenswrapper[5012]: I0219 05:26:48.702881 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2"
Feb 19 05:26:48 crc kubenswrapper[5012]: I0219 05:26:48.702938 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 05:26:48 crc kubenswrapper[5012]: E0219 05:26:48.703837 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 05:26:48 crc kubenswrapper[5012]: E0219 05:26:48.703967 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449"
Feb 19 05:26:48 crc kubenswrapper[5012]: E0219 05:26:48.704174 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 05:26:48 crc kubenswrapper[5012]: I0219 05:26:48.704395 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 05:26:48 crc kubenswrapper[5012]: E0219 05:26:48.704615 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 05:26:50 crc kubenswrapper[5012]: I0219 05:26:50.702438 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2"
Feb 19 05:26:50 crc kubenswrapper[5012]: I0219 05:26:50.702487 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 05:26:50 crc kubenswrapper[5012]: E0219 05:26:50.702602 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449"
Feb 19 05:26:50 crc kubenswrapper[5012]: I0219 05:26:50.702631 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 05:26:50 crc kubenswrapper[5012]: I0219 05:26:50.702685 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 05:26:50 crc kubenswrapper[5012]: E0219 05:26:50.702777 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 05:26:50 crc kubenswrapper[5012]: E0219 05:26:50.702843 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 05:26:50 crc kubenswrapper[5012]: E0219 05:26:50.702914 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 05:26:52 crc kubenswrapper[5012]: I0219 05:26:52.701870 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2"
Feb 19 05:26:52 crc kubenswrapper[5012]: I0219 05:26:52.701912 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 05:26:52 crc kubenswrapper[5012]: I0219 05:26:52.701982 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 05:26:52 crc kubenswrapper[5012]: I0219 05:26:52.701987 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 05:26:52 crc kubenswrapper[5012]: E0219 05:26:52.702363 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449"
Feb 19 05:26:52 crc kubenswrapper[5012]: E0219 05:26:52.702745 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 05:26:52 crc kubenswrapper[5012]: E0219 05:26:52.702526 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 05:26:52 crc kubenswrapper[5012]: E0219 05:26:52.702881 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 05:26:54 crc kubenswrapper[5012]: I0219 05:26:54.702076 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 05:26:54 crc kubenswrapper[5012]: I0219 05:26:54.702134 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2"
Feb 19 05:26:54 crc kubenswrapper[5012]: I0219 05:26:54.702087 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 05:26:54 crc kubenswrapper[5012]: I0219 05:26:54.702221 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 05:26:54 crc kubenswrapper[5012]: E0219 05:26:54.702229 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:54 crc kubenswrapper[5012]: E0219 05:26:54.704526 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:54 crc kubenswrapper[5012]: E0219 05:26:54.704893 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:54 crc kubenswrapper[5012]: E0219 05:26:54.704994 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:54 crc kubenswrapper[5012]: I0219 05:26:54.756268 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=9.756237737 podStartE2EDuration="9.756237737s" podCreationTimestamp="2026-02-19 05:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:26:54.745931824 +0000 UTC m=+110.779254433" watchObservedRunningTime="2026-02-19 05:26:54.756237737 +0000 UTC m=+110.789560336" Feb 19 05:26:56 crc kubenswrapper[5012]: I0219 05:26:56.701876 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:56 crc kubenswrapper[5012]: I0219 05:26:56.701939 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:56 crc kubenswrapper[5012]: E0219 05:26:56.702057 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:56 crc kubenswrapper[5012]: I0219 05:26:56.702147 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:56 crc kubenswrapper[5012]: I0219 05:26:56.702172 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:56 crc kubenswrapper[5012]: E0219 05:26:56.702390 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:56 crc kubenswrapper[5012]: E0219 05:26:56.702549 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:56 crc kubenswrapper[5012]: E0219 05:26:56.702669 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:58 crc kubenswrapper[5012]: I0219 05:26:58.702448 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:26:58 crc kubenswrapper[5012]: I0219 05:26:58.702573 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:26:58 crc kubenswrapper[5012]: I0219 05:26:58.702655 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:26:58 crc kubenswrapper[5012]: I0219 05:26:58.702859 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:26:58 crc kubenswrapper[5012]: E0219 05:26:58.702834 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:26:58 crc kubenswrapper[5012]: E0219 05:26:58.703064 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:26:58 crc kubenswrapper[5012]: E0219 05:26:58.703340 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:26:58 crc kubenswrapper[5012]: E0219 05:26:58.703444 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:26:59 crc kubenswrapper[5012]: I0219 05:26:59.703062 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:26:59 crc kubenswrapper[5012]: E0219 05:26:59.703283 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8ff9w_openshift-ovn-kubernetes(0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.487976 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/1.log" Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.489253 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/0.log" Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.489350 5012 generic.go:334] "Generic (PLEG): container finished" podID="e7a04e36-fbaa-4de1-871a-7225433eebb0" containerID="fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd" exitCode=1 Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.489392 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-lkrsg" event={"ID":"e7a04e36-fbaa-4de1-871a-7225433eebb0","Type":"ContainerDied","Data":"fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd"} Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.489438 5012 scope.go:117] "RemoveContainer" containerID="10b5c19187f91d0f3fd924e34bcf174600ec2dc465bd26ab4280113692658061" Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.490018 5012 scope.go:117] "RemoveContainer" containerID="fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd" Feb 19 05:27:00 crc kubenswrapper[5012]: E0219 05:27:00.490443 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lkrsg_openshift-multus(e7a04e36-fbaa-4de1-871a-7225433eebb0)\"" pod="openshift-multus/multus-lkrsg" podUID="e7a04e36-fbaa-4de1-871a-7225433eebb0" Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.702421 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.702655 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.702700 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:00 crc kubenswrapper[5012]: E0219 05:27:00.702811 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:00 crc kubenswrapper[5012]: I0219 05:27:00.702866 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:00 crc kubenswrapper[5012]: E0219 05:27:00.703037 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:00 crc kubenswrapper[5012]: E0219 05:27:00.703165 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:00 crc kubenswrapper[5012]: E0219 05:27:00.703221 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:01 crc kubenswrapper[5012]: I0219 05:27:01.494946 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/1.log" Feb 19 05:27:02 crc kubenswrapper[5012]: I0219 05:27:02.702429 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:02 crc kubenswrapper[5012]: I0219 05:27:02.702458 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:02 crc kubenswrapper[5012]: E0219 05:27:02.703036 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:02 crc kubenswrapper[5012]: I0219 05:27:02.702534 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:02 crc kubenswrapper[5012]: I0219 05:27:02.702473 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:02 crc kubenswrapper[5012]: E0219 05:27:02.703164 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:02 crc kubenswrapper[5012]: E0219 05:27:02.703467 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:02 crc kubenswrapper[5012]: E0219 05:27:02.703698 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:04 crc kubenswrapper[5012]: E0219 05:27:04.646867 5012 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 19 05:27:04 crc kubenswrapper[5012]: I0219 05:27:04.702005 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:04 crc kubenswrapper[5012]: I0219 05:27:04.702088 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:04 crc kubenswrapper[5012]: I0219 05:27:04.702155 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:04 crc kubenswrapper[5012]: E0219 05:27:04.703808 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:04 crc kubenswrapper[5012]: I0219 05:27:04.703843 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:04 crc kubenswrapper[5012]: E0219 05:27:04.704031 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:04 crc kubenswrapper[5012]: E0219 05:27:04.704171 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:04 crc kubenswrapper[5012]: E0219 05:27:04.704510 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:04 crc kubenswrapper[5012]: E0219 05:27:04.833179 5012 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 05:27:06 crc kubenswrapper[5012]: I0219 05:27:06.702258 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:06 crc kubenswrapper[5012]: I0219 05:27:06.702435 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:06 crc kubenswrapper[5012]: E0219 05:27:06.703557 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:06 crc kubenswrapper[5012]: I0219 05:27:06.702667 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:06 crc kubenswrapper[5012]: E0219 05:27:06.703638 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:06 crc kubenswrapper[5012]: I0219 05:27:06.702514 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:06 crc kubenswrapper[5012]: E0219 05:27:06.703730 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:06 crc kubenswrapper[5012]: E0219 05:27:06.703801 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:08 crc kubenswrapper[5012]: I0219 05:27:08.701986 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:08 crc kubenswrapper[5012]: I0219 05:27:08.702017 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:08 crc kubenswrapper[5012]: I0219 05:27:08.702159 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:08 crc kubenswrapper[5012]: E0219 05:27:08.702263 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:08 crc kubenswrapper[5012]: I0219 05:27:08.702282 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:08 crc kubenswrapper[5012]: E0219 05:27:08.702456 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:08 crc kubenswrapper[5012]: E0219 05:27:08.702655 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:08 crc kubenswrapper[5012]: E0219 05:27:08.702736 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:09 crc kubenswrapper[5012]: E0219 05:27:09.835177 5012 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 05:27:10 crc kubenswrapper[5012]: I0219 05:27:10.702167 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:10 crc kubenswrapper[5012]: I0219 05:27:10.702282 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:10 crc kubenswrapper[5012]: E0219 05:27:10.702435 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:10 crc kubenswrapper[5012]: I0219 05:27:10.702504 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:10 crc kubenswrapper[5012]: I0219 05:27:10.702570 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:10 crc kubenswrapper[5012]: E0219 05:27:10.702662 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:10 crc kubenswrapper[5012]: E0219 05:27:10.702868 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:10 crc kubenswrapper[5012]: E0219 05:27:10.703058 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:11 crc kubenswrapper[5012]: I0219 05:27:11.703217 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:27:12 crc kubenswrapper[5012]: I0219 05:27:12.536106 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/3.log" Feb 19 05:27:12 crc kubenswrapper[5012]: I0219 05:27:12.541114 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerStarted","Data":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} Feb 19 05:27:12 crc kubenswrapper[5012]: I0219 05:27:12.541703 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:27:12 crc kubenswrapper[5012]: I0219 05:27:12.585870 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podStartSLOduration=106.585849484 podStartE2EDuration="1m46.585849484s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:12.585070744 +0000 UTC m=+128.618393343" watchObservedRunningTime="2026-02-19 05:27:12.585849484 +0000 UTC m=+128.619172083" Feb 19 05:27:12 crc kubenswrapper[5012]: I0219 05:27:12.651676 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-q5cb2"] Feb 19 05:27:12 crc kubenswrapper[5012]: I0219 05:27:12.651855 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:12 crc kubenswrapper[5012]: E0219 05:27:12.651992 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:12 crc kubenswrapper[5012]: I0219 05:27:12.702963 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:12 crc kubenswrapper[5012]: I0219 05:27:12.703032 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:12 crc kubenswrapper[5012]: I0219 05:27:12.703050 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:12 crc kubenswrapper[5012]: E0219 05:27:12.703159 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:12 crc kubenswrapper[5012]: E0219 05:27:12.703326 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:12 crc kubenswrapper[5012]: E0219 05:27:12.703406 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:14 crc kubenswrapper[5012]: I0219 05:27:14.702092 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:14 crc kubenswrapper[5012]: I0219 05:27:14.702128 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:14 crc kubenswrapper[5012]: I0219 05:27:14.702107 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:14 crc kubenswrapper[5012]: E0219 05:27:14.703963 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:14 crc kubenswrapper[5012]: I0219 05:27:14.704008 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:14 crc kubenswrapper[5012]: E0219 05:27:14.704140 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:14 crc kubenswrapper[5012]: E0219 05:27:14.704257 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:14 crc kubenswrapper[5012]: E0219 05:27:14.704396 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:14 crc kubenswrapper[5012]: I0219 05:27:14.704949 5012 scope.go:117] "RemoveContainer" containerID="fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd" Feb 19 05:27:14 crc kubenswrapper[5012]: E0219 05:27:14.835660 5012 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 19 05:27:15 crc kubenswrapper[5012]: I0219 05:27:15.552829 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/1.log" Feb 19 05:27:15 crc kubenswrapper[5012]: I0219 05:27:15.552909 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lkrsg" event={"ID":"e7a04e36-fbaa-4de1-871a-7225433eebb0","Type":"ContainerStarted","Data":"9dee99959c58361002b098beb811940fb74ac9f7c81b432ebe5142128b4aec05"} Feb 19 05:27:16 crc kubenswrapper[5012]: I0219 05:27:16.702694 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:16 crc kubenswrapper[5012]: I0219 05:27:16.702747 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:16 crc kubenswrapper[5012]: I0219 05:27:16.702691 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:16 crc kubenswrapper[5012]: E0219 05:27:16.702874 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:16 crc kubenswrapper[5012]: E0219 05:27:16.702996 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:16 crc kubenswrapper[5012]: I0219 05:27:16.703060 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:16 crc kubenswrapper[5012]: E0219 05:27:16.703100 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:16 crc kubenswrapper[5012]: E0219 05:27:16.703240 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:18 crc kubenswrapper[5012]: I0219 05:27:18.701985 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:18 crc kubenswrapper[5012]: I0219 05:27:18.702258 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:18 crc kubenswrapper[5012]: E0219 05:27:18.702620 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 05:27:18 crc kubenswrapper[5012]: I0219 05:27:18.702329 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:18 crc kubenswrapper[5012]: I0219 05:27:18.702286 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:18 crc kubenswrapper[5012]: E0219 05:27:18.702728 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 05:27:18 crc kubenswrapper[5012]: E0219 05:27:18.702794 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-q5cb2" podUID="2e231950-a365-4a82-9481-05fdac171449" Feb 19 05:27:18 crc kubenswrapper[5012]: E0219 05:27:18.702967 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.701847 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.702422 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.702470 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.703056 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.705622 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.705902 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.706286 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.706565 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.706922 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 05:27:20 crc kubenswrapper[5012]: I0219 05:27:20.708163 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.810360 5012 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.863960 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hjmb9"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.864779 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.869394 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.869987 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.870002 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6qvzq"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.870907 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.873029 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.873367 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.873541 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.872872 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ntrlp"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.874404 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.874483 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.874958 5012 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.874578 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.874595 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.875193 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.879406 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-thnmn"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.880144 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.885069 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.886143 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.887918 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.890190 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.906895 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.907190 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.907744 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.907783 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.908018 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.908347 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.908676 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.909220 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.906112 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.913132 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.920510 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.920957 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.922112 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.926564 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.950536 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.952682 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.953320 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956232 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956412 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956522 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956581 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956644 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956769 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956809 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956888 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.957021 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.957165 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 05:27:23 
crc kubenswrapper[5012]: I0219 05:27:23.957320 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.957423 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956232 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.956771 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.957643 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.957576 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.957739 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5lz5f"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.957878 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958045 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958213 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958353 5012 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958424 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958462 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958477 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958582 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958694 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958716 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958865 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.958882 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959000 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959128 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959167 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959176 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/462e6b9c-5e51-439d-aee8-9e7651b8c35a-serving-cert\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959204 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d02c79-2b95-4c7a-ae75-f366d40fe558-config\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959226 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-config\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959247 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-config\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959263 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d02c79-2b95-4c7a-ae75-f366d40fe558-serving-cert\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959282 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-image-import-ca\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959317 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-client-ca\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959333 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959337 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/462e6b9c-5e51-439d-aee8-9e7651b8c35a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 
05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959514 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-etcd-client\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959586 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h7v5\" (UniqueName: \"kubernetes.io/projected/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-kube-api-access-8h7v5\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959619 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-machine-approver-tls\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959651 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-config\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959681 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89f1d0f3-c220-4668-b822-3b20b64ebfb8-serving-cert\") 
pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959711 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-config\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959737 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af251e39-e77d-4cf8-a359-02645dc98b38-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kjwlb\" (UID: \"af251e39-e77d-4cf8-a359-02645dc98b38\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959765 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5vpz\" (UniqueName: \"kubernetes.io/projected/7e9dd710-d0ec-443f-a081-b18c4b6abe36-kube-api-access-q5vpz\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959795 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:23 crc 
kubenswrapper[5012]: I0219 05:27:23.959805 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959894 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/462e6b9c-5e51-439d-aee8-9e7651b8c35a-etcd-client\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959943 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-etcd-serving-ca\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959969 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-images\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.959992 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-audit-dir\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960013 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-297z9\" 
(UniqueName: \"kubernetes.io/projected/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-kube-api-access-297z9\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960035 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e9dd710-d0ec-443f-a081-b18c4b6abe36-serving-cert\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960055 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgg97\" (UniqueName: \"kubernetes.io/projected/89f1d0f3-c220-4668-b822-3b20b64ebfb8-kube-api-access-fgg97\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960117 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960147 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-client-ca\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960171 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-node-pullsecrets\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960192 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/462e6b9c-5e51-439d-aee8-9e7651b8c35a-audit-dir\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960216 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/462e6b9c-5e51-439d-aee8-9e7651b8c35a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960250 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spjjm\" (UniqueName: \"kubernetes.io/projected/462e6b9c-5e51-439d-aee8-9e7651b8c35a-kube-api-access-spjjm\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960285 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e4d02c79-2b95-4c7a-ae75-f366d40fe558-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960333 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxdl2\" (UniqueName: \"kubernetes.io/projected/af251e39-e77d-4cf8-a359-02645dc98b38-kube-api-access-cxdl2\") pod \"openshift-apiserver-operator-796bbdcf4f-kjwlb\" (UID: \"af251e39-e77d-4cf8-a359-02645dc98b38\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960357 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d02c79-2b95-4c7a-ae75-f366d40fe558-service-ca-bundle\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960383 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960407 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hctmj\" (UniqueName: \"kubernetes.io/projected/e4d02c79-2b95-4c7a-ae75-f366d40fe558-kube-api-access-hctmj\") pod \"authentication-operator-69f744f599-thnmn\" (UID: 
\"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960442 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-audit\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960464 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/462e6b9c-5e51-439d-aee8-9e7651b8c35a-audit-policies\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960485 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-serving-cert\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960504 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/462e6b9c-5e51-439d-aee8-9e7651b8c35a-encryption-config\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960527 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-auth-proxy-config\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960552 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-config\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960575 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-encryption-config\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960609 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjbsx\" (UniqueName: \"kubernetes.io/projected/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-kube-api-access-qjbsx\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.960629 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af251e39-e77d-4cf8-a359-02645dc98b38-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kjwlb\" (UID: \"af251e39-e77d-4cf8-a359-02645dc98b38\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" Feb 19 05:27:23 crc 
kubenswrapper[5012]: I0219 05:27:23.966622 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.966865 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.967020 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.969415 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6mmvm"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.970467 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.970491 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.971245 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ljzsp"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.973997 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.975426 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.975438 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.975506 5012 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.975908 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.976643 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.977526 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.981344 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.982093 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.986320 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.986833 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.987347 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-tjxj6"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.989685 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.989854 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-tjxj6" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.990367 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-twxgh"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.991035 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.991630 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.992136 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.993408 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-mlxbg"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.993901 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.994256 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.994868 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.997315 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.997865 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng"] Feb 19 05:27:23 crc kubenswrapper[5012]: I0219 05:27:23.998419 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xphkg"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:23.999048 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.000433 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.000521 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.014535 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ntrlp"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.014581 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.015093 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.021490 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.030678 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.031895 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.032177 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.039797 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.040694 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.048941 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.075001 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.075234 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-client-ca\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.075337 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-node-pullsecrets\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.075418 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/462e6b9c-5e51-439d-aee8-9e7651b8c35a-audit-dir\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 
05:27:24.075518 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/462e6b9c-5e51-439d-aee8-9e7651b8c35a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.075593 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spjjm\" (UniqueName: \"kubernetes.io/projected/462e6b9c-5e51-439d-aee8-9e7651b8c35a-kube-api-access-spjjm\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.075667 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d02c79-2b95-4c7a-ae75-f366d40fe558-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.075744 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxdl2\" (UniqueName: \"kubernetes.io/projected/af251e39-e77d-4cf8-a359-02645dc98b38-kube-api-access-cxdl2\") pod \"openshift-apiserver-operator-796bbdcf4f-kjwlb\" (UID: \"af251e39-e77d-4cf8-a359-02645dc98b38\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.075878 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d02c79-2b95-4c7a-ae75-f366d40fe558-service-ca-bundle\") pod \"authentication-operator-69f744f599-thnmn\" (UID: 
\"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.075952 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076020 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hctmj\" (UniqueName: \"kubernetes.io/projected/e4d02c79-2b95-4c7a-ae75-f366d40fe558-kube-api-access-hctmj\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076096 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-audit\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076161 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/462e6b9c-5e51-439d-aee8-9e7651b8c35a-audit-policies\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076230 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-serving-cert\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076318 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/462e6b9c-5e51-439d-aee8-9e7651b8c35a-encryption-config\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076397 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-auth-proxy-config\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076470 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-config\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076533 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-encryption-config\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076604 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjbsx\" 
(UniqueName: \"kubernetes.io/projected/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-kube-api-access-qjbsx\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076676 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af251e39-e77d-4cf8-a359-02645dc98b38-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kjwlb\" (UID: \"af251e39-e77d-4cf8-a359-02645dc98b38\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076790 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/462e6b9c-5e51-439d-aee8-9e7651b8c35a-serving-cert\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076954 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d02c79-2b95-4c7a-ae75-f366d40fe558-config\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077015 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-config\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077042 
5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-config\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077066 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d02c79-2b95-4c7a-ae75-f366d40fe558-serving-cert\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077093 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-image-import-ca\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077115 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-client-ca\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077140 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/462e6b9c-5e51-439d-aee8-9e7651b8c35a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077163 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-etcd-client\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077212 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h7v5\" (UniqueName: \"kubernetes.io/projected/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-kube-api-access-8h7v5\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077237 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-machine-approver-tls\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077257 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-config\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077278 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89f1d0f3-c220-4668-b822-3b20b64ebfb8-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077319 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-config\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077345 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af251e39-e77d-4cf8-a359-02645dc98b38-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kjwlb\" (UID: \"af251e39-e77d-4cf8-a359-02645dc98b38\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077370 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5vpz\" (UniqueName: \"kubernetes.io/projected/7e9dd710-d0ec-443f-a081-b18c4b6abe36-kube-api-access-q5vpz\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077402 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077426 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/462e6b9c-5e51-439d-aee8-9e7651b8c35a-etcd-client\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077454 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-etcd-serving-ca\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077475 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-images\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077498 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-audit-dir\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077522 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-297z9\" (UniqueName: \"kubernetes.io/projected/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-kube-api-access-297z9\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077549 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e9dd710-d0ec-443f-a081-b18c4b6abe36-serving-cert\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077572 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgg97\" (UniqueName: \"kubernetes.io/projected/89f1d0f3-c220-4668-b822-3b20b64ebfb8-kube-api-access-fgg97\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077644 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d02c79-2b95-4c7a-ae75-f366d40fe558-service-ca-bundle\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077662 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-client-ca\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077718 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-node-pullsecrets\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " 
pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077750 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/462e6b9c-5e51-439d-aee8-9e7651b8c35a-audit-dir\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077750 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.078447 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.078533 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d02c79-2b95-4c7a-ae75-f366d40fe558-config\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.078716 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/462e6b9c-5e51-439d-aee8-9e7651b8c35a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.079163 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-audit-dir\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.079413 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-config\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.079939 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-images\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.080417 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.080617 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-client-ca\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.080819 5012 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.081164 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.082487 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-etcd-serving-ca\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.083106 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-auth-proxy-config\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.083248 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/462e6b9c-5e51-439d-aee8-9e7651b8c35a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.083564 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.083728 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-audit\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.080292 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-config\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.084485 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-config\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076410 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.087829 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-config\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.089147 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/462e6b9c-5e51-439d-aee8-9e7651b8c35a-audit-policies\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.090407 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af251e39-e77d-4cf8-a359-02645dc98b38-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kjwlb\" (UID: \"af251e39-e77d-4cf8-a359-02645dc98b38\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.091582 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.091710 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.091706 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-config\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.091889 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.091995 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.092100 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" 
Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.094110 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-machine-approver-tls\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.094400 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076435 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076487 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076686 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.076816 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.077992 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.095986 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-image-import-ca\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.078673 
5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.078850 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.078924 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.079027 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.079155 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.103944 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.104123 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.104205 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.104351 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.104465 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.105267 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/462e6b9c-5e51-439d-aee8-9e7651b8c35a-encryption-config\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.105446 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.105539 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.105663 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.105763 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.105850 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.105954 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.106037 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.105499 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.106183 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 
05:27:24.105666 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.106284 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.106344 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.106457 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.106558 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.107397 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xfb4j"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.109434 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af251e39-e77d-4cf8-a359-02645dc98b38-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kjwlb\" (UID: \"af251e39-e77d-4cf8-a359-02645dc98b38\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.109912 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89f1d0f3-c220-4668-b822-3b20b64ebfb8-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.111008 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.111535 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.112004 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.112315 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.112827 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.113604 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/462e6b9c-5e51-439d-aee8-9e7651b8c35a-serving-cert\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.113636 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.113656 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.113935 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e9dd710-d0ec-443f-a081-b18c4b6abe36-serving-cert\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.114431 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.114437 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.114455 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-serving-cert\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " 
pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.115287 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.115375 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.115576 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.116822 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.117173 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.117430 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.119164 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d02c79-2b95-4c7a-ae75-f366d40fe558-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.129998 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4d02c79-2b95-4c7a-ae75-f366d40fe558-serving-cert\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.130109 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwd8z"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.130900 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.131555 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.131804 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/462e6b9c-5e51-439d-aee8-9e7651b8c35a-etcd-client\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.132048 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-etcd-client\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.132371 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.134955 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.136117 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.137280 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-encryption-config\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.137289 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.137712 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.140901 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gv2pd"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.149832 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.152382 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.152618 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-7q9qf"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.154089 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7q9qf" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.156694 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.157499 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.157774 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gwx52"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.158738 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.159487 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-thnmn"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.161190 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.167823 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.168915 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6qvzq"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.169847 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.170857 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.171503 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.172454 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.172888 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ljzsp"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.173925 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tjxj6"] Feb 19 
05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.174991 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6mmvm"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.175990 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.176958 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.177974 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.178749 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-trusted-ca-bundle\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.178843 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/726872cb-1000-4656-beea-2bd59752199c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tv8j7\" (UID: \"726872cb-1000-4656-beea-2bd59752199c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.178866 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1387b34e-3233-49a1-9e37-ef1e7f4fb660-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-ccstp\" (UID: \"1387b34e-3233-49a1-9e37-ef1e7f4fb660\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.178889 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-oauth-config\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.178908 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/726872cb-1000-4656-beea-2bd59752199c-proxy-tls\") pod \"machine-config-controller-84d6567774-tv8j7\" (UID: \"726872cb-1000-4656-beea-2bd59752199c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.178953 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-serving-cert\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.178990 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78dedde0-cb75-4ee7-8735-e6f071a02b10-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhgng\" (UID: \"78dedde0-cb75-4ee7-8735-e6f071a02b10\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 
05:27:24.178991 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5lz5f"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.179103 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzxsb\" (UniqueName: \"kubernetes.io/projected/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-kube-api-access-dzxsb\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.179160 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-service-ca\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.179178 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-config\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.179197 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1387b34e-3233-49a1-9e37-ef1e7f4fb660-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ccstp\" (UID: \"1387b34e-3233-49a1-9e37-ef1e7f4fb660\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.179239 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dw9hx\" (UniqueName: \"kubernetes.io/projected/726872cb-1000-4656-beea-2bd59752199c-kube-api-access-dw9hx\") pod \"machine-config-controller-84d6567774-tv8j7\" (UID: \"726872cb-1000-4656-beea-2bd59752199c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.179257 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktsw8\" (UniqueName: \"kubernetes.io/projected/78dedde0-cb75-4ee7-8735-e6f071a02b10-kube-api-access-ktsw8\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhgng\" (UID: \"78dedde0-cb75-4ee7-8735-e6f071a02b10\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.179281 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-oauth-serving-cert\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.179311 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78dedde0-cb75-4ee7-8735-e6f071a02b10-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhgng\" (UID: \"78dedde0-cb75-4ee7-8735-e6f071a02b10\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.179361 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1387b34e-3233-49a1-9e37-ef1e7f4fb660-config\") pod 
\"kube-apiserver-operator-766d6c64bb-ccstp\" (UID: \"1387b34e-3233-49a1-9e37-ef1e7f4fb660\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.180049 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.181389 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.182165 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.183167 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-x2l69"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.183992 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.184197 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n8t75"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.185547 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.185672 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.186241 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.186328 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.188123 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.188164 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-twxgh"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.190069 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gwx52"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.190212 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.192082 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.193480 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.195119 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mlxbg"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.196640 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx"] Feb 19 05:27:24 
crc kubenswrapper[5012]: I0219 05:27:24.197732 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n8t75"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.198950 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xfb4j"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.200050 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.202031 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.203207 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.204334 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.205467 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7q9qf"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.206574 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.206896 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwd8z"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.207893 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hjmb9"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.209032 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x2l69"] Feb 19 05:27:24 
crc kubenswrapper[5012]: I0219 05:27:24.209949 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gv2pd"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.210848 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-znn5k"] Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.211407 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-znn5k" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.227123 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.246645 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.277618 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.279937 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzxsb\" (UniqueName: \"kubernetes.io/projected/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-kube-api-access-dzxsb\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280018 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-service-ca\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280057 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-config\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280091 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1387b34e-3233-49a1-9e37-ef1e7f4fb660-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ccstp\" (UID: \"1387b34e-3233-49a1-9e37-ef1e7f4fb660\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280126 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw9hx\" (UniqueName: \"kubernetes.io/projected/726872cb-1000-4656-beea-2bd59752199c-kube-api-access-dw9hx\") pod \"machine-config-controller-84d6567774-tv8j7\" (UID: \"726872cb-1000-4656-beea-2bd59752199c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280155 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktsw8\" (UniqueName: \"kubernetes.io/projected/78dedde0-cb75-4ee7-8735-e6f071a02b10-kube-api-access-ktsw8\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhgng\" (UID: \"78dedde0-cb75-4ee7-8735-e6f071a02b10\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280191 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-oauth-serving-cert\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " 
pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280216 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78dedde0-cb75-4ee7-8735-e6f071a02b10-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhgng\" (UID: \"78dedde0-cb75-4ee7-8735-e6f071a02b10\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280245 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1387b34e-3233-49a1-9e37-ef1e7f4fb660-config\") pod \"kube-apiserver-operator-766d6c64bb-ccstp\" (UID: \"1387b34e-3233-49a1-9e37-ef1e7f4fb660\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280283 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-trusted-ca-bundle\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280342 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-oauth-config\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280367 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/726872cb-1000-4656-beea-2bd59752199c-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-tv8j7\" (UID: \"726872cb-1000-4656-beea-2bd59752199c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280393 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1387b34e-3233-49a1-9e37-ef1e7f4fb660-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ccstp\" (UID: \"1387b34e-3233-49a1-9e37-ef1e7f4fb660\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280426 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/726872cb-1000-4656-beea-2bd59752199c-proxy-tls\") pod \"machine-config-controller-84d6567774-tv8j7\" (UID: \"726872cb-1000-4656-beea-2bd59752199c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280453 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-serving-cert\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280488 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78dedde0-cb75-4ee7-8735-e6f071a02b10-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhgng\" (UID: \"78dedde0-cb75-4ee7-8735-e6f071a02b10\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.280935 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-service-ca\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.281550 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/726872cb-1000-4656-beea-2bd59752199c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tv8j7\" (UID: \"726872cb-1000-4656-beea-2bd59752199c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.281560 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-trusted-ca-bundle\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.281693 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1387b34e-3233-49a1-9e37-ef1e7f4fb660-config\") pod \"kube-apiserver-operator-766d6c64bb-ccstp\" (UID: \"1387b34e-3233-49a1-9e37-ef1e7f4fb660\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.284257 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-oauth-config\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.284593 
5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1387b34e-3233-49a1-9e37-ef1e7f4fb660-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ccstp\" (UID: \"1387b34e-3233-49a1-9e37-ef1e7f4fb660\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.287126 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-serving-cert\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.287313 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.291347 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-oauth-serving-cert\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.307358 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.311426 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-config\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.327101 5012 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.347004 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.368155 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.387096 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.406859 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.431450 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.448490 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.455724 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/726872cb-1000-4656-beea-2bd59752199c-proxy-tls\") pod \"machine-config-controller-84d6567774-tv8j7\" (UID: \"726872cb-1000-4656-beea-2bd59752199c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.467714 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.487508 5012 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.508733 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.527613 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.547489 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.568136 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.589471 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.610051 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.616767 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78dedde0-cb75-4ee7-8735-e6f071a02b10-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhgng\" (UID: \"78dedde0-cb75-4ee7-8735-e6f071a02b10\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.627943 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 
05:27:24.631778 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78dedde0-cb75-4ee7-8735-e6f071a02b10-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhgng\" (UID: \"78dedde0-cb75-4ee7-8735-e6f071a02b10\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.647861 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.668252 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.727219 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.729384 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.730382 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.747980 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.768403 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.787894 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 05:27:24 crc 
kubenswrapper[5012]: I0219 05:27:24.808015 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.829514 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.847262 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.868742 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.887923 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.908562 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.928732 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.947585 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 05:27:24 crc kubenswrapper[5012]: I0219 05:27:24.967377 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.035673 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spjjm\" (UniqueName: 
\"kubernetes.io/projected/462e6b9c-5e51-439d-aee8-9e7651b8c35a-kube-api-access-spjjm\") pod \"apiserver-7bbb656c7d-87qqk\" (UID: \"462e6b9c-5e51-439d-aee8-9e7651b8c35a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.053618 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hctmj\" (UniqueName: \"kubernetes.io/projected/e4d02c79-2b95-4c7a-ae75-f366d40fe558-kube-api-access-hctmj\") pod \"authentication-operator-69f744f599-thnmn\" (UID: \"e4d02c79-2b95-4c7a-ae75-f366d40fe558\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.073583 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxdl2\" (UniqueName: \"kubernetes.io/projected/af251e39-e77d-4cf8-a359-02645dc98b38-kube-api-access-cxdl2\") pod \"openshift-apiserver-operator-796bbdcf4f-kjwlb\" (UID: \"af251e39-e77d-4cf8-a359-02645dc98b38\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.086144 5012 request.go:700] Waited for 1.00699013s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.094143 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgg97\" (UniqueName: \"kubernetes.io/projected/89f1d0f3-c220-4668-b822-3b20b64ebfb8-kube-api-access-fgg97\") pod \"route-controller-manager-6576b87f9c-mn4f2\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.114153 5012 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-q5vpz\" (UniqueName: \"kubernetes.io/projected/7e9dd710-d0ec-443f-a081-b18c4b6abe36-kube-api-access-q5vpz\") pod \"controller-manager-879f6c89f-ntrlp\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.124687 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.134418 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-297z9\" (UniqueName: \"kubernetes.io/projected/4888722d-d5dd-4748-ac7b-a1d11ba08e6e-kube-api-access-297z9\") pod \"apiserver-76f77b778f-hjmb9\" (UID: \"4888722d-d5dd-4748-ac7b-a1d11ba08e6e\") " pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.148084 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.158857 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h7v5\" (UniqueName: \"kubernetes.io/projected/5c537eae-5a27-4a4d-ba9e-0fd7efe72f37-kube-api-access-8h7v5\") pod \"machine-api-operator-5694c8668f-6qvzq\" (UID: \"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.162827 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.168088 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.178042 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.188343 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.197962 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.215937 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.240486 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjbsx\" (UniqueName: \"kubernetes.io/projected/5bb4ce13-477c-4c8d-89b5-0d6cc099095c-kube-api-access-qjbsx\") pod \"machine-approver-56656f9798-tnq42\" (UID: \"5bb4ce13-477c-4c8d-89b5-0d6cc099095c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.248144 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.267925 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.284094 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.287345 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.289123 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.309295 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.327551 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.348751 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.368250 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.389459 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.398579 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.409717 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.423835 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-thnmn"] Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.427141 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.449511 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.467908 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.487252 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.507394 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.531731 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.547538 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.567926 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 05:27:25 crc 
kubenswrapper[5012]: I0219 05:27:25.587124 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.596355 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" event={"ID":"5bb4ce13-477c-4c8d-89b5-0d6cc099095c","Type":"ContainerStarted","Data":"6eab049c47c31bf2d41a3ab4fe756097cb325ba1a85dbf667dc4e2937f242d63"} Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.596418 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" event={"ID":"5bb4ce13-477c-4c8d-89b5-0d6cc099095c","Type":"ContainerStarted","Data":"e9fe810e37e6df787ab52ad04ca8ce08181039e71290e7fdd7f3db3f700cdffb"} Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.601116 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" event={"ID":"e4d02c79-2b95-4c7a-ae75-f366d40fe558","Type":"ContainerStarted","Data":"79fd3c0788386dd9f0b11a8a5afdce9e82fdf95c32bcc886092dc5eec7a00a0d"} Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.601162 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" event={"ID":"e4d02c79-2b95-4c7a-ae75-f366d40fe558","Type":"ContainerStarted","Data":"95e2c396f0921deeceeb6d73d792abd9827b8b1dc239c2d40419276526654e59"} Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.608111 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.610531 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hjmb9"] Feb 19 05:27:25 crc kubenswrapper[5012]: W0219 05:27:25.619006 5012 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4888722d_d5dd_4748_ac7b_a1d11ba08e6e.slice/crio-dceabac4fd3c41899884d1330da27bd5b20c6eac03e5235cf07da17192b3dc26 WatchSource:0}: Error finding container dceabac4fd3c41899884d1330da27bd5b20c6eac03e5235cf07da17192b3dc26: Status 404 returned error can't find the container with id dceabac4fd3c41899884d1330da27bd5b20c6eac03e5235cf07da17192b3dc26 Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.627760 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.647555 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.651669 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6qvzq"] Feb 19 05:27:25 crc kubenswrapper[5012]: W0219 05:27:25.661338 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c537eae_5a27_4a4d_ba9e_0fd7efe72f37.slice/crio-0d7741dd8935ca80837ae4f1d3e7c159a96896d5f1a49cea2f52d5089d348d51 WatchSource:0}: Error finding container 0d7741dd8935ca80837ae4f1d3e7c159a96896d5f1a49cea2f52d5089d348d51: Status 404 returned error can't find the container with id 0d7741dd8935ca80837ae4f1d3e7c159a96896d5f1a49cea2f52d5089d348d51 Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.667004 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.688008 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.693677 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-ntrlp"] Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.695221 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk"] Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.708064 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 05:27:25 crc kubenswrapper[5012]: W0219 05:27:25.708096 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod462e6b9c_5e51_439d_aee8_9e7651b8c35a.slice/crio-0a9f48a203a0a3a4a70e04f238a6e34e7595f66b91ff57dc3097a43f3ff6ddfa WatchSource:0}: Error finding container 0a9f48a203a0a3a4a70e04f238a6e34e7595f66b91ff57dc3097a43f3ff6ddfa: Status 404 returned error can't find the container with id 0a9f48a203a0a3a4a70e04f238a6e34e7595f66b91ff57dc3097a43f3ff6ddfa Feb 19 05:27:25 crc kubenswrapper[5012]: W0219 05:27:25.710562 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e9dd710_d0ec_443f_a081_b18c4b6abe36.slice/crio-1ee3dd9b34ee54e0754750a439b4590af9a0a688e92512f756cbea34daf382ca WatchSource:0}: Error finding container 1ee3dd9b34ee54e0754750a439b4590af9a0a688e92512f756cbea34daf382ca: Status 404 returned error can't find the container with id 1ee3dd9b34ee54e0754750a439b4590af9a0a688e92512f756cbea34daf382ca Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.733211 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.747823 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.754169 5012 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2"] Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.758248 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb"] Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.768421 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.788869 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.807652 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: W0219 05:27:25.826269 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89f1d0f3_c220_4668_b822_3b20b64ebfb8.slice/crio-5003562696efaf86d8b690a85cdcf58c161a34b94a16cc2ce64a20964ec94127 WatchSource:0}: Error finding container 5003562696efaf86d8b690a85cdcf58c161a34b94a16cc2ce64a20964ec94127: Status 404 returned error can't find the container with id 5003562696efaf86d8b690a85cdcf58c161a34b94a16cc2ce64a20964ec94127 Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.827224 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 05:27:25 crc kubenswrapper[5012]: W0219 05:27:25.828242 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf251e39_e77d_4cf8_a359_02645dc98b38.slice/crio-16428c123ec8b757275908431734a8ff065b22d9c78cd4eb6cac9268a1b80501 WatchSource:0}: Error finding container 16428c123ec8b757275908431734a8ff065b22d9c78cd4eb6cac9268a1b80501: 
Status 404 returned error can't find the container with id 16428c123ec8b757275908431734a8ff065b22d9c78cd4eb6cac9268a1b80501 Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.847424 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.868209 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.887085 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.907712 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.928068 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.948786 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.968181 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 05:27:25 crc kubenswrapper[5012]: I0219 05:27:25.988472 5012 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.008489 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.027910 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.048943 5012 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.068054 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.087741 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.105723 5012 request.go:700] Waited for 1.825597554s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.151574 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzxsb\" (UniqueName: \"kubernetes.io/projected/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-kube-api-access-dzxsb\") pod \"console-f9d7485db-mlxbg\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.159321 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1387b34e-3233-49a1-9e37-ef1e7f4fb660-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ccstp\" (UID: \"1387b34e-3233-49a1-9e37-ef1e7f4fb660\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.169528 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktsw8\" (UniqueName: \"kubernetes.io/projected/78dedde0-cb75-4ee7-8735-e6f071a02b10-kube-api-access-ktsw8\") pod \"kube-storage-version-migrator-operator-b67b599dd-dhgng\" (UID: \"78dedde0-cb75-4ee7-8735-e6f071a02b10\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.206861 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-trusted-ca\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.208471 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.208524 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20a18862-6cbd-4fb1-9d69-ae768e0afddd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmswj\" (UID: \"20a18862-6cbd-4fb1-9d69-ae768e0afddd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.208643 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-975f8\" (UniqueName: \"kubernetes.io/projected/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-kube-api-access-975f8\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.208682 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6383e6d2-7e9e-4927-a55a-f574e48d316d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xg4d5\" (UID: \"6383e6d2-7e9e-4927-a55a-f574e48d316d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.208793 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crvb2\" (UniqueName: \"kubernetes.io/projected/af89e320-2661-4860-8079-0c1ff810d97a-kube-api-access-crvb2\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.208836 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/af89e320-2661-4860-8079-0c1ff810d97a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.208893 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff5220b-0304-48dc-b2eb-e2bd2a2c8205-config\") pod \"service-ca-operator-777779d784-gwtrd\" (UID: \"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.208954 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-default-certificate\") pod 
\"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.208998 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2ef24f0-0d7d-4d25-a839-b650893a8332-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209036 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20a18862-6cbd-4fb1-9d69-ae768e0afddd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmswj\" (UID: \"20a18862-6cbd-4fb1-9d69-ae768e0afddd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209078 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d69b3-8e9f-4413-93c1-3c1f77388221-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mt2l6\" (UID: \"e69d69b3-8e9f-4413-93c1-3c1f77388221\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209118 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzhhf\" (UniqueName: \"kubernetes.io/projected/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-kube-api-access-lzhhf\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " 
pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209193 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209235 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btf46\" (UniqueName: \"kubernetes.io/projected/c2ef24f0-0d7d-4d25-a839-b650893a8332-kube-api-access-btf46\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209264 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-webhook-cert\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209287 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af89e320-2661-4860-8079-0c1ff810d97a-metrics-tls\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209354 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-trusted-ca\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209378 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-dir\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209401 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209428 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209450 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/746299dc-637f-42a3-ad0d-0de202bae64e-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-j8fg8\" (UID: \"746299dc-637f-42a3-ad0d-0de202bae64e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209465 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw9hx\" (UniqueName: \"kubernetes.io/projected/726872cb-1000-4656-beea-2bd59752199c-kube-api-access-dw9hx\") pod \"machine-config-controller-84d6567774-tv8j7\" (UID: \"726872cb-1000-4656-beea-2bd59752199c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209506 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6383e6d2-7e9e-4927-a55a-f574e48d316d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xg4d5\" (UID: \"6383e6d2-7e9e-4927-a55a-f574e48d316d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209552 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209574 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-serving-cert\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 
05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209600 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-bound-sa-token\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209624 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209656 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d9907b5-e862-4242-b233-ed39e5de515a-serving-cert\") pod \"openshift-config-operator-7777fb866f-9kvdd\" (UID: \"9d9907b5-e862-4242-b233-ed39e5de515a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209733 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ff5220b-0304-48dc-b2eb-e2bd2a2c8205-serving-cert\") pod \"service-ca-operator-777779d784-gwtrd\" (UID: \"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209776 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-certificates\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209798 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-service-ca-bundle\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209822 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2ef24f0-0d7d-4d25-a839-b650893a8332-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209847 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/70e7a5c6-0abf-4c78-8087-958a19264b49-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.209887 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:26.709874512 +0000 UTC m=+142.743197081 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209915 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/70e7a5c6-0abf-4c78-8087-958a19264b49-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209934 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmlnj\" (UniqueName: \"kubernetes.io/projected/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-kube-api-access-pmlnj\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209974 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gdlm\" (UniqueName: \"kubernetes.io/projected/a3d6e827-2fd3-4026-8bbb-b6336cf7c020-kube-api-access-2gdlm\") pod \"dns-operator-744455d44c-5lz5f\" (UID: \"a3d6e827-2fd3-4026-8bbb-b6336cf7c020\") " pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.209994 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/053058a2-c542-41f4-b393-1be45501cfa9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v57rh\" (UID: \"053058a2-c542-41f4-b393-1be45501cfa9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210025 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210046 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210073 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3d6e827-2fd3-4026-8bbb-b6336cf7c020-metrics-tls\") pod \"dns-operator-744455d44c-5lz5f\" (UID: \"a3d6e827-2fd3-4026-8bbb-b6336cf7c020\") " pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210089 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-apiservice-cert\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210104 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jq8v\" (UniqueName: \"kubernetes.io/projected/6ff5220b-0304-48dc-b2eb-e2bd2a2c8205-kube-api-access-6jq8v\") pod \"service-ca-operator-777779d784-gwtrd\" (UID: \"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210135 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/053058a2-c542-41f4-b393-1be45501cfa9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v57rh\" (UID: \"053058a2-c542-41f4-b393-1be45501cfa9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210161 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20a18862-6cbd-4fb1-9d69-ae768e0afddd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmswj\" (UID: \"20a18862-6cbd-4fb1-9d69-ae768e0afddd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210220 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210239 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-tls\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210262 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-tmpfs\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210286 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6383e6d2-7e9e-4927-a55a-f574e48d316d-config\") pod \"kube-controller-manager-operator-78b949d7b-xg4d5\" (UID: \"6383e6d2-7e9e-4927-a55a-f574e48d316d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210319 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-metrics-certs\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210338 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210399 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw2l6\" (UniqueName: \"kubernetes.io/projected/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-kube-api-access-lw2l6\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210422 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdsc5\" (UniqueName: \"kubernetes.io/projected/746299dc-637f-42a3-ad0d-0de202bae64e-kube-api-access-cdsc5\") pod \"cluster-samples-operator-665b6dd947-j8fg8\" (UID: \"746299dc-637f-42a3-ad0d-0de202bae64e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.210442 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-policies\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.211278 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg8nf\" (UniqueName: \"kubernetes.io/projected/e69d69b3-8e9f-4413-93c1-3c1f77388221-kube-api-access-tg8nf\") pod \"package-server-manager-789f6589d5-mt2l6\" (UID: \"e69d69b3-8e9f-4413-93c1-3c1f77388221\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" Feb 19 05:27:26 crc 
kubenswrapper[5012]: I0219 05:27:26.211461 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2ef24f0-0d7d-4d25-a839-b650893a8332-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.212492 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-stats-auth\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.212549 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmhxd\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-kube-api-access-pmhxd\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.212709 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.212984 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx6d8\" (UniqueName: 
\"kubernetes.io/projected/c4edd2db-a884-46ac-9a12-0cd2a5daaeb5-kube-api-access-dx6d8\") pod \"downloads-7954f5f757-tjxj6\" (UID: \"c4edd2db-a884-46ac-9a12-0cd2a5daaeb5\") " pod="openshift-console/downloads-7954f5f757-tjxj6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.213032 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.213171 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af89e320-2661-4860-8079-0c1ff810d97a-trusted-ca\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.213205 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9d9907b5-e862-4242-b233-ed39e5de515a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9kvdd\" (UID: \"9d9907b5-e862-4242-b233-ed39e5de515a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.213266 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft4kq\" (UniqueName: \"kubernetes.io/projected/9d9907b5-e862-4242-b233-ed39e5de515a-kube-api-access-ft4kq\") pod \"openshift-config-operator-7777fb866f-9kvdd\" (UID: \"9d9907b5-e862-4242-b233-ed39e5de515a\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.213467 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-config\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.213515 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvh75\" (UniqueName: \"kubernetes.io/projected/053058a2-c542-41f4-b393-1be45501cfa9-kube-api-access-rvh75\") pod \"openshift-controller-manager-operator-756b6f6bc6-v57rh\" (UID: \"053058a2-c542-41f4-b393-1be45501cfa9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.252698 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315068 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.315575 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 05:27:26.815521315 +0000 UTC m=+142.848843904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315608 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6383e6d2-7e9e-4927-a55a-f574e48d316d-config\") pod \"kube-controller-manager-operator-78b949d7b-xg4d5\" (UID: \"6383e6d2-7e9e-4927-a55a-f574e48d316d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315645 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5kdx\" (UniqueName: \"kubernetes.io/projected/8a05e6ff-179f-4a04-9fc2-524e31980467-kube-api-access-c5kdx\") pod \"dns-default-x2l69\" (UID: \"8a05e6ff-179f-4a04-9fc2-524e31980467\") " pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315670 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46582f7f-c6b0-4ae3-9103-4a4754304438-secret-volume\") pod \"collect-profiles-29524635-psnb6\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315695 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f848507-d616-4d06-885f-d84210d9b4a0-etcd-service-ca\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315718 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qltv6\" (UniqueName: \"kubernetes.io/projected/bdad60bd-8af5-439a-a62e-edf676281c47-kube-api-access-qltv6\") pod \"service-ca-9c57cc56f-gv2pd\" (UID: \"bdad60bd-8af5-439a-a62e-edf676281c47\") " pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315742 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-metrics-certs\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315766 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315795 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-registration-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: 
I0219 05:27:26.315822 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bdad60bd-8af5-439a-a62e-edf676281c47-signing-cabundle\") pod \"service-ca-9c57cc56f-gv2pd\" (UID: \"bdad60bd-8af5-439a-a62e-edf676281c47\") " pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315873 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw2l6\" (UniqueName: \"kubernetes.io/projected/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-kube-api-access-lw2l6\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315896 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g82wz\" (UniqueName: \"kubernetes.io/projected/f5a5c8b4-57c3-43fc-a404-2754d0e70c50-kube-api-access-g82wz\") pod \"ingress-canary-7q9qf\" (UID: \"f5a5c8b4-57c3-43fc-a404-2754d0e70c50\") " pod="openshift-ingress-canary/ingress-canary-7q9qf"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315939 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdsc5\" (UniqueName: \"kubernetes.io/projected/746299dc-637f-42a3-ad0d-0de202bae64e-kube-api-access-cdsc5\") pod \"cluster-samples-operator-665b6dd947-j8fg8\" (UID: \"746299dc-637f-42a3-ad0d-0de202bae64e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315960 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-policies\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.315985 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg8nf\" (UniqueName: \"kubernetes.io/projected/e69d69b3-8e9f-4413-93c1-3c1f77388221-kube-api-access-tg8nf\") pod \"package-server-manager-789f6589d5-mt2l6\" (UID: \"e69d69b3-8e9f-4413-93c1-3c1f77388221\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316032 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2ef24f0-0d7d-4d25-a839-b650893a8332-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316058 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a05e6ff-179f-4a04-9fc2-524e31980467-config-volume\") pod \"dns-default-x2l69\" (UID: \"8a05e6ff-179f-4a04-9fc2-524e31980467\") " pod="openshift-dns/dns-default-x2l69"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316082 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-stats-auth\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316106 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2876c7dd-5979-49eb-ab61-8ffce07376b2-srv-cert\") pod \"olm-operator-6b444d44fb-jv9qx\" (UID: \"2876c7dd-5979-49eb-ab61-8ffce07376b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316130 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmhxd\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-kube-api-access-pmhxd\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316158 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316187 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/73661022-0008-4452-b140-f0a75e4c40c7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316210 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx6d8\" (UniqueName: \"kubernetes.io/projected/c4edd2db-a884-46ac-9a12-0cd2a5daaeb5-kube-api-access-dx6d8\") pod \"downloads-7954f5f757-tjxj6\" (UID: \"c4edd2db-a884-46ac-9a12-0cd2a5daaeb5\") " pod="openshift-console/downloads-7954f5f757-tjxj6"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316236 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316288 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fdhm\" (UniqueName: \"kubernetes.io/projected/4087f246-2160-469e-8ad1-d88c147ff7c0-kube-api-access-9fdhm\") pod \"multus-admission-controller-857f4d67dd-xfb4j\" (UID: \"4087f246-2160-469e-8ad1-d88c147ff7c0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316355 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/056df788-349b-4549-88ab-66bbc2ff6afb-node-bootstrap-token\") pod \"machine-config-server-znn5k\" (UID: \"056df788-349b-4549-88ab-66bbc2ff6afb\") " pod="openshift-machine-config-operator/machine-config-server-znn5k"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316378 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f848507-d616-4d06-885f-d84210d9b4a0-serving-cert\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316411 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af89e320-2661-4860-8079-0c1ff810d97a-trusted-ca\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316448 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1fe71123-0d33-41fa-b582-02d70177d0f0-srv-cert\") pod \"catalog-operator-68c6474976-x52wm\" (UID: \"1fe71123-0d33-41fa-b582-02d70177d0f0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316479 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9d9907b5-e862-4242-b233-ed39e5de515a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9kvdd\" (UID: \"9d9907b5-e862-4242-b233-ed39e5de515a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316506 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft4kq\" (UniqueName: \"kubernetes.io/projected/9d9907b5-e862-4242-b233-ed39e5de515a-kube-api-access-ft4kq\") pod \"openshift-config-operator-7777fb866f-9kvdd\" (UID: \"9d9907b5-e862-4242-b233-ed39e5de515a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316529 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-config\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316554 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvh75\" (UniqueName: \"kubernetes.io/projected/053058a2-c542-41f4-b393-1be45501cfa9-kube-api-access-rvh75\") pod \"openshift-controller-manager-operator-756b6f6bc6-v57rh\" (UID: \"053058a2-c542-41f4-b393-1be45501cfa9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316583 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx5fm\" (UniqueName: \"kubernetes.io/projected/ab107439-3fd5-41e7-9d30-71962fc96028-kube-api-access-wx5fm\") pod \"migrator-59844c95c7-sppcx\" (UID: \"ab107439-3fd5-41e7-9d30-71962fc96028\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316605 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-plugins-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316627 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4087f246-2160-469e-8ad1-d88c147ff7c0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xfb4j\" (UID: \"4087f246-2160-469e-8ad1-d88c147ff7c0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316652 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-trusted-ca\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316674 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316697 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-975f8\" (UniqueName: \"kubernetes.io/projected/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-kube-api-access-975f8\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316722 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6383e6d2-7e9e-4927-a55a-f574e48d316d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xg4d5\" (UID: \"6383e6d2-7e9e-4927-a55a-f574e48d316d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316744 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20a18862-6cbd-4fb1-9d69-ae768e0afddd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmswj\" (UID: \"20a18862-6cbd-4fb1-9d69-ae768e0afddd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316771 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v76p\" (UniqueName: \"kubernetes.io/projected/562c18aa-5aed-4f1e-95f5-da1fe7c02523-kube-api-access-4v76p\") pod \"marketplace-operator-79b997595-kwd8z\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316794 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crvb2\" (UniqueName: \"kubernetes.io/projected/af89e320-2661-4860-8079-0c1ff810d97a-kube-api-access-crvb2\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316831 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1fe71123-0d33-41fa-b582-02d70177d0f0-profile-collector-cert\") pod \"catalog-operator-68c6474976-x52wm\" (UID: \"1fe71123-0d33-41fa-b582-02d70177d0f0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316864 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/af89e320-2661-4860-8079-0c1ff810d97a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316901 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz9rv\" (UniqueName: \"kubernetes.io/projected/59cc3a77-bf98-42ed-98d8-a921b7039c6f-kube-api-access-cz9rv\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316933 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4f848507-d616-4d06-885f-d84210d9b4a0-etcd-ca\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.316981 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c24k9\" (UniqueName: \"kubernetes.io/projected/46582f7f-c6b0-4ae3-9103-4a4754304438-kube-api-access-c24k9\") pod \"collect-profiles-29524635-psnb6\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317006 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff5220b-0304-48dc-b2eb-e2bd2a2c8205-config\") pod \"service-ca-operator-777779d784-gwtrd\" (UID: \"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317041 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-default-certificate\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317065 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20a18862-6cbd-4fb1-9d69-ae768e0afddd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmswj\" (UID: \"20a18862-6cbd-4fb1-9d69-ae768e0afddd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317090 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bn2r\" (UniqueName: \"kubernetes.io/projected/056df788-349b-4549-88ab-66bbc2ff6afb-kube-api-access-6bn2r\") pod \"machine-config-server-znn5k\" (UID: \"056df788-349b-4549-88ab-66bbc2ff6afb\") " pod="openshift-machine-config-operator/machine-config-server-znn5k"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317127 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2ef24f0-0d7d-4d25-a839-b650893a8332-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317168 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d69b3-8e9f-4413-93c1-3c1f77388221-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mt2l6\" (UID: \"e69d69b3-8e9f-4413-93c1-3c1f77388221\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317194 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzhhf\" (UniqueName: \"kubernetes.io/projected/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-kube-api-access-lzhhf\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317218 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-csi-data-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317243 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwl8b\" (UniqueName: \"kubernetes.io/projected/1fe71123-0d33-41fa-b582-02d70177d0f0-kube-api-access-zwl8b\") pod \"catalog-operator-68c6474976-x52wm\" (UID: \"1fe71123-0d33-41fa-b582-02d70177d0f0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317269 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btf46\" (UniqueName: \"kubernetes.io/projected/c2ef24f0-0d7d-4d25-a839-b650893a8332-kube-api-access-btf46\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317295 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317343 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af89e320-2661-4860-8079-0c1ff810d97a-metrics-tls\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317383 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-webhook-cert\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317418 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-trusted-ca\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317453 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-dir\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317476 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317500 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4f848507-d616-4d06-885f-d84210d9b4a0-etcd-client\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317524 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317551 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-mountpoint-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317573 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/73661022-0008-4452-b140-f0a75e4c40c7-images\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317599 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/746299dc-637f-42a3-ad0d-0de202bae64e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j8fg8\" (UID: \"746299dc-637f-42a3-ad0d-0de202bae64e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317622 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6383e6d2-7e9e-4927-a55a-f574e48d316d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xg4d5\" (UID: \"6383e6d2-7e9e-4927-a55a-f574e48d316d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317648 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f848507-d616-4d06-885f-d84210d9b4a0-config\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317677 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317703 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-serving-cert\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317726 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bdad60bd-8af5-439a-a62e-edf676281c47-signing-key\") pod \"service-ca-9c57cc56f-gv2pd\" (UID: \"bdad60bd-8af5-439a-a62e-edf676281c47\") " pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317753 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcl96\" (UniqueName: \"kubernetes.io/projected/4f848507-d616-4d06-885f-d84210d9b4a0-kube-api-access-xcl96\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317777 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-bound-sa-token\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317801 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317825 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d9907b5-e862-4242-b233-ed39e5de515a-serving-cert\") pod \"openshift-config-operator-7777fb866f-9kvdd\" (UID: \"9d9907b5-e862-4242-b233-ed39e5de515a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317843 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317852 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46582f7f-c6b0-4ae3-9103-4a4754304438-config-volume\") pod \"collect-profiles-29524635-psnb6\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.318058 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-dir\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317647 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-policies\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.317616 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6383e6d2-7e9e-4927-a55a-f574e48d316d-config\") pod \"kube-controller-manager-operator-78b949d7b-xg4d5\" (UID: \"6383e6d2-7e9e-4927-a55a-f574e48d316d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.319182 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-config\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh"
Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.319189 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:26.819163794 +0000 UTC m=+142.852486373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.320172 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ff5220b-0304-48dc-b2eb-e2bd2a2c8205-serving-cert\") pod \"service-ca-operator-777779d784-gwtrd\" (UID: \"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.320237 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nwdd\" (UniqueName: \"kubernetes.io/projected/9102ddf1-e140-48e7-9ecd-14a4c007f5d5-kube-api-access-5nwdd\") pod \"control-plane-machine-set-operator-78cbb6b69f-mbxqf\" (UID: \"9102ddf1-e140-48e7-9ecd-14a4c007f5d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.320259 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45jfl\" (UniqueName: \"kubernetes.io/projected/73661022-0008-4452-b140-f0a75e4c40c7-kube-api-access-45jfl\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.322524 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.323117 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/9d9907b5-e862-4242-b233-ed39e5de515a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-9kvdd\" (UID: \"9d9907b5-e862-4242-b233-ed39e5de515a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.323617 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d69b3-8e9f-4413-93c1-3c1f77388221-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-mt2l6\" (UID: \"e69d69b3-8e9f-4413-93c1-3c1f77388221\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.323699 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/746299dc-637f-42a3-ad0d-0de202bae64e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j8fg8\" (UID: \"746299dc-637f-42a3-ad0d-0de202bae64e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.323749 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-stats-auth\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.324384 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-serving-cert\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.324560 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ff5220b-0304-48dc-b2eb-e2bd2a2c8205-serving-cert\") pod \"service-ca-operator-777779d784-gwtrd\" (UID: \"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.324651 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-service-ca-bundle\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.324682 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2ef24f0-0d7d-4d25-a839-b650893a8332-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.324724 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-certificates\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.325810 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-service-ca-bundle\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.325859 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/70e7a5c6-0abf-4c78-8087-958a19264b49-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.325909 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/70e7a5c6-0abf-4c78-8087-958a19264b49-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326023 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-certificates\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326037 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmlnj\" (UniqueName: \"kubernetes.io/projected/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-kube-api-access-pmlnj\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326075 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a05e6ff-179f-4a04-9fc2-524e31980467-metrics-tls\") pod \"dns-default-x2l69\" (UID: \"8a05e6ff-179f-4a04-9fc2-524e31980467\") " pod="openshift-dns/dns-default-x2l69"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326405 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2ef24f0-0d7d-4d25-a839-b650893a8332-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326565 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20a18862-6cbd-4fb1-9d69-ae768e0afddd-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmswj\" (UID: \"20a18862-6cbd-4fb1-9d69-ae768e0afddd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326585 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/70e7a5c6-0abf-4c78-8087-958a19264b49-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326759 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzf97\" (UniqueName: \"kubernetes.io/projected/2876c7dd-5979-49eb-ab61-8ffce07376b2-kube-api-access-kzf97\") pod \"olm-operator-6b444d44fb-jv9qx\" (UID: \"2876c7dd-5979-49eb-ab61-8ffce07376b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326819 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gdlm\" (UniqueName: \"kubernetes.io/projected/a3d6e827-2fd3-4026-8bbb-b6336cf7c020-kube-api-access-2gdlm\") pod \"dns-operator-744455d44c-5lz5f\" (UID: \"a3d6e827-2fd3-4026-8bbb-b6336cf7c020\") " pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326853 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff5220b-0304-48dc-b2eb-e2bd2a2c8205-config\") pod \"service-ca-operator-777779d784-gwtrd\" (UID: 
\"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326865 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/053058a2-c542-41f4-b393-1be45501cfa9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v57rh\" (UID: \"053058a2-c542-41f4-b393-1be45501cfa9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.326979 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.327654 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af89e320-2661-4860-8079-0c1ff810d97a-metrics-tls\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.328223 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.328407 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5a5c8b4-57c3-43fc-a404-2754d0e70c50-cert\") pod \"ingress-canary-7q9qf\" (UID: \"f5a5c8b4-57c3-43fc-a404-2754d0e70c50\") " pod="openshift-ingress-canary/ingress-canary-7q9qf" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.328598 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.328787 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3d6e827-2fd3-4026-8bbb-b6336cf7c020-metrics-tls\") pod \"dns-operator-744455d44c-5lz5f\" (UID: \"a3d6e827-2fd3-4026-8bbb-b6336cf7c020\") " pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.328954 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/053058a2-c542-41f4-b393-1be45501cfa9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v57rh\" (UID: \"053058a2-c542-41f4-b393-1be45501cfa9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.329153 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20a18862-6cbd-4fb1-9d69-ae768e0afddd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmswj\" (UID: \"20a18862-6cbd-4fb1-9d69-ae768e0afddd\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.329232 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/056df788-349b-4549-88ab-66bbc2ff6afb-certs\") pod \"machine-config-server-znn5k\" (UID: \"056df788-349b-4549-88ab-66bbc2ff6afb\") " pod="openshift-machine-config-operator/machine-config-server-znn5k" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.330078 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-apiservice-cert\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.330279 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jq8v\" (UniqueName: \"kubernetes.io/projected/6ff5220b-0304-48dc-b2eb-e2bd2a2c8205-kube-api-access-6jq8v\") pod \"service-ca-operator-777779d784-gwtrd\" (UID: \"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.330608 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/73661022-0008-4452-b140-f0a75e4c40c7-proxy-tls\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.330720 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kwd8z\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.330748 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-socket-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.330965 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.331067 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kwd8z\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.331210 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-tls\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc 
kubenswrapper[5012]: I0219 05:27:26.331252 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2876c7dd-5979-49eb-ab61-8ffce07376b2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jv9qx\" (UID: \"2876c7dd-5979-49eb-ab61-8ffce07376b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.331765 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-tmpfs\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.331833 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9102ddf1-e140-48e7-9ecd-14a4c007f5d5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mbxqf\" (UID: \"9102ddf1-e140-48e7-9ecd-14a4c007f5d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.331942 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d9907b5-e862-4242-b233-ed39e5de515a-serving-cert\") pod \"openshift-config-operator-7777fb866f-9kvdd\" (UID: \"9d9907b5-e862-4242-b233-ed39e5de515a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.332505 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-trusted-ca\") 
pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.332995 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c2ef24f0-0d7d-4d25-a839-b650893a8332-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.331507 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-trusted-ca\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.334604 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.337034 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/70e7a5c6-0abf-4c78-8087-958a19264b49-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.337534 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af89e320-2661-4860-8079-0c1ff810d97a-trusted-ca\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.338039 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.338168 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-tls\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.338889 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-default-certificate\") pod 
\"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.338999 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6383e6d2-7e9e-4927-a55a-f574e48d316d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xg4d5\" (UID: \"6383e6d2-7e9e-4927-a55a-f574e48d316d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.339450 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-metrics-certs\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.339791 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.340339 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.341585 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.341649 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/053058a2-c542-41f4-b393-1be45501cfa9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v57rh\" (UID: \"053058a2-c542-41f4-b393-1be45501cfa9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.341657 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.342524 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/053058a2-c542-41f4-b393-1be45501cfa9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v57rh\" (UID: \"053058a2-c542-41f4-b393-1be45501cfa9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.343672 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: 
\"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.343813 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-tmpfs\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.343887 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-webhook-cert\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.345141 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.345220 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.346792 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a3d6e827-2fd3-4026-8bbb-b6336cf7c020-metrics-tls\") pod \"dns-operator-744455d44c-5lz5f\" (UID: \"a3d6e827-2fd3-4026-8bbb-b6336cf7c020\") " pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.347422 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-apiservice-cert\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: \"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.348880 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20a18862-6cbd-4fb1-9d69-ae768e0afddd-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmswj\" (UID: \"20a18862-6cbd-4fb1-9d69-ae768e0afddd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.349401 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmhxd\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-kube-api-access-pmhxd\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.358983 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.365206 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.367783 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdsc5\" (UniqueName: \"kubernetes.io/projected/746299dc-637f-42a3-ad0d-0de202bae64e-kube-api-access-cdsc5\") pod \"cluster-samples-operator-665b6dd947-j8fg8\" (UID: \"746299dc-637f-42a3-ad0d-0de202bae64e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.388526 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg8nf\" (UniqueName: \"kubernetes.io/projected/e69d69b3-8e9f-4413-93c1-3c1f77388221-kube-api-access-tg8nf\") pod \"package-server-manager-789f6589d5-mt2l6\" (UID: \"e69d69b3-8e9f-4413-93c1-3c1f77388221\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.424922 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzhhf\" (UniqueName: \"kubernetes.io/projected/47b7dc89-8538-41f1-b569-a2b6dcbf8f13-kube-api-access-lzhhf\") pod \"console-operator-58897d9998-twxgh\" (UID: \"47b7dc89-8538-41f1-b569-a2b6dcbf8f13\") " pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434264 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.434431 5012 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:26.934393889 +0000 UTC m=+142.967716458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434537 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/73661022-0008-4452-b140-f0a75e4c40c7-proxy-tls\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434584 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kwd8z\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434608 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-socket-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 
crc kubenswrapper[5012]: I0219 05:27:26.434627 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kwd8z\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434823 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2876c7dd-5979-49eb-ab61-8ffce07376b2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jv9qx\" (UID: \"2876c7dd-5979-49eb-ab61-8ffce07376b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434858 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9102ddf1-e140-48e7-9ecd-14a4c007f5d5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mbxqf\" (UID: \"9102ddf1-e140-48e7-9ecd-14a4c007f5d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434899 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5kdx\" (UniqueName: \"kubernetes.io/projected/8a05e6ff-179f-4a04-9fc2-524e31980467-kube-api-access-c5kdx\") pod \"dns-default-x2l69\" (UID: \"8a05e6ff-179f-4a04-9fc2-524e31980467\") " pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434919 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46582f7f-c6b0-4ae3-9103-4a4754304438-secret-volume\") pod 
\"collect-profiles-29524635-psnb6\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434937 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f848507-d616-4d06-885f-d84210d9b4a0-etcd-service-ca\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434951 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qltv6\" (UniqueName: \"kubernetes.io/projected/bdad60bd-8af5-439a-a62e-edf676281c47-kube-api-access-qltv6\") pod \"service-ca-9c57cc56f-gv2pd\" (UID: \"bdad60bd-8af5-439a-a62e-edf676281c47\") " pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.434986 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-registration-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435003 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bdad60bd-8af5-439a-a62e-edf676281c47-signing-cabundle\") pod \"service-ca-9c57cc56f-gv2pd\" (UID: \"bdad60bd-8af5-439a-a62e-edf676281c47\") " pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435029 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g82wz\" (UniqueName: 
\"kubernetes.io/projected/f5a5c8b4-57c3-43fc-a404-2754d0e70c50-kube-api-access-g82wz\") pod \"ingress-canary-7q9qf\" (UID: \"f5a5c8b4-57c3-43fc-a404-2754d0e70c50\") " pod="openshift-ingress-canary/ingress-canary-7q9qf" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435080 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a05e6ff-179f-4a04-9fc2-524e31980467-config-volume\") pod \"dns-default-x2l69\" (UID: \"8a05e6ff-179f-4a04-9fc2-524e31980467\") " pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435098 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2876c7dd-5979-49eb-ab61-8ffce07376b2-srv-cert\") pod \"olm-operator-6b444d44fb-jv9qx\" (UID: \"2876c7dd-5979-49eb-ab61-8ffce07376b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435133 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/73661022-0008-4452-b140-f0a75e4c40c7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435158 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fdhm\" (UniqueName: \"kubernetes.io/projected/4087f246-2160-469e-8ad1-d88c147ff7c0-kube-api-access-9fdhm\") pod \"multus-admission-controller-857f4d67dd-xfb4j\" (UID: \"4087f246-2160-469e-8ad1-d88c147ff7c0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435176 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/056df788-349b-4549-88ab-66bbc2ff6afb-node-bootstrap-token\") pod \"machine-config-server-znn5k\" (UID: \"056df788-349b-4549-88ab-66bbc2ff6afb\") " pod="openshift-machine-config-operator/machine-config-server-znn5k" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435214 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f848507-d616-4d06-885f-d84210d9b4a0-serving-cert\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435233 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1fe71123-0d33-41fa-b582-02d70177d0f0-srv-cert\") pod \"catalog-operator-68c6474976-x52wm\" (UID: \"1fe71123-0d33-41fa-b582-02d70177d0f0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.435259 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx5fm\" (UniqueName: \"kubernetes.io/projected/ab107439-3fd5-41e7-9d30-71962fc96028-kube-api-access-wx5fm\") pod \"migrator-59844c95c7-sppcx\" (UID: \"ab107439-3fd5-41e7-9d30-71962fc96028\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437000 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-plugins-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 
05:27:26.437148 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4087f246-2160-469e-8ad1-d88c147ff7c0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xfb4j\" (UID: \"4087f246-2160-469e-8ad1-d88c147ff7c0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437210 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v76p\" (UniqueName: \"kubernetes.io/projected/562c18aa-5aed-4f1e-95f5-da1fe7c02523-kube-api-access-4v76p\") pod \"marketplace-operator-79b997595-kwd8z\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437240 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1fe71123-0d33-41fa-b582-02d70177d0f0-profile-collector-cert\") pod \"catalog-operator-68c6474976-x52wm\" (UID: \"1fe71123-0d33-41fa-b582-02d70177d0f0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437265 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz9rv\" (UniqueName: \"kubernetes.io/projected/59cc3a77-bf98-42ed-98d8-a921b7039c6f-kube-api-access-cz9rv\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437332 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4f848507-d616-4d06-885f-d84210d9b4a0-etcd-ca\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437351 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c24k9\" (UniqueName: \"kubernetes.io/projected/46582f7f-c6b0-4ae3-9103-4a4754304438-kube-api-access-c24k9\") pod \"collect-profiles-29524635-psnb6\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437370 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bn2r\" (UniqueName: \"kubernetes.io/projected/056df788-349b-4549-88ab-66bbc2ff6afb-kube-api-access-6bn2r\") pod \"machine-config-server-znn5k\" (UID: \"056df788-349b-4549-88ab-66bbc2ff6afb\") " pod="openshift-machine-config-operator/machine-config-server-znn5k" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437409 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-csi-data-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437427 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwl8b\" (UniqueName: \"kubernetes.io/projected/1fe71123-0d33-41fa-b582-02d70177d0f0-kube-api-access-zwl8b\") pod \"catalog-operator-68c6474976-x52wm\" (UID: \"1fe71123-0d33-41fa-b582-02d70177d0f0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437547 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4f848507-d616-4d06-885f-d84210d9b4a0-etcd-client\") 
pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437569 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-mountpoint-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437585 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/73661022-0008-4452-b140-f0a75e4c40c7-images\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437627 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f848507-d616-4d06-885f-d84210d9b4a0-config\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437649 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437668 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/bdad60bd-8af5-439a-a62e-edf676281c47-signing-key\") pod \"service-ca-9c57cc56f-gv2pd\" (UID: \"bdad60bd-8af5-439a-a62e-edf676281c47\") " pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437706 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcl96\" (UniqueName: \"kubernetes.io/projected/4f848507-d616-4d06-885f-d84210d9b4a0-kube-api-access-xcl96\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437733 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46582f7f-c6b0-4ae3-9103-4a4754304438-config-volume\") pod \"collect-profiles-29524635-psnb6\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437758 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nwdd\" (UniqueName: \"kubernetes.io/projected/9102ddf1-e140-48e7-9ecd-14a4c007f5d5-kube-api-access-5nwdd\") pod \"control-plane-machine-set-operator-78cbb6b69f-mbxqf\" (UID: \"9102ddf1-e140-48e7-9ecd-14a4c007f5d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437796 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45jfl\" (UniqueName: \"kubernetes.io/projected/73661022-0008-4452-b140-f0a75e4c40c7-kube-api-access-45jfl\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc 
kubenswrapper[5012]: I0219 05:27:26.437832 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a05e6ff-179f-4a04-9fc2-524e31980467-metrics-tls\") pod \"dns-default-x2l69\" (UID: \"8a05e6ff-179f-4a04-9fc2-524e31980467\") " pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437868 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzf97\" (UniqueName: \"kubernetes.io/projected/2876c7dd-5979-49eb-ab61-8ffce07376b2-kube-api-access-kzf97\") pod \"olm-operator-6b444d44fb-jv9qx\" (UID: \"2876c7dd-5979-49eb-ab61-8ffce07376b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437899 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5a5c8b4-57c3-43fc-a404-2754d0e70c50-cert\") pod \"ingress-canary-7q9qf\" (UID: \"f5a5c8b4-57c3-43fc-a404-2754d0e70c50\") " pod="openshift-ingress-canary/ingress-canary-7q9qf" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.437915 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/056df788-349b-4549-88ab-66bbc2ff6afb-certs\") pod \"machine-config-server-znn5k\" (UID: \"056df788-349b-4549-88ab-66bbc2ff6afb\") " pod="openshift-machine-config-operator/machine-config-server-znn5k" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.438256 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a05e6ff-179f-4a04-9fc2-524e31980467-config-volume\") pod \"dns-default-x2l69\" (UID: \"8a05e6ff-179f-4a04-9fc2-524e31980467\") " pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.438360 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bdad60bd-8af5-439a-a62e-edf676281c47-signing-cabundle\") pod \"service-ca-9c57cc56f-gv2pd\" (UID: \"bdad60bd-8af5-439a-a62e-edf676281c47\") " pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.438467 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-registration-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.436496 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-socket-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.438715 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kwd8z\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.438763 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-plugins-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.438969 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-csi-data-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.439555 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f848507-d616-4d06-885f-d84210d9b4a0-etcd-service-ca\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.439846 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/73661022-0008-4452-b140-f0a75e4c40c7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.441780 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46582f7f-c6b0-4ae3-9103-4a4754304438-config-volume\") pod \"collect-profiles-29524635-psnb6\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.441824 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4f848507-d616-4d06-885f-d84210d9b4a0-etcd-client\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.442814 5012 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/73661022-0008-4452-b140-f0a75e4c40c7-images\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.443177 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2876c7dd-5979-49eb-ab61-8ffce07376b2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jv9qx\" (UID: \"2876c7dd-5979-49eb-ab61-8ffce07376b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.443404 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kwd8z\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.443414 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2876c7dd-5979-49eb-ab61-8ffce07376b2-srv-cert\") pod \"olm-operator-6b444d44fb-jv9qx\" (UID: \"2876c7dd-5979-49eb-ab61-8ffce07376b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.443575 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/59cc3a77-bf98-42ed-98d8-a921b7039c6f-mountpoint-dir\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.443823 
5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/056df788-349b-4549-88ab-66bbc2ff6afb-certs\") pod \"machine-config-server-znn5k\" (UID: \"056df788-349b-4549-88ab-66bbc2ff6afb\") " pod="openshift-machine-config-operator/machine-config-server-znn5k" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.444327 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4f848507-d616-4d06-885f-d84210d9b4a0-etcd-ca\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.445224 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5a5c8b4-57c3-43fc-a404-2754d0e70c50-cert\") pod \"ingress-canary-7q9qf\" (UID: \"f5a5c8b4-57c3-43fc-a404-2754d0e70c50\") " pod="openshift-ingress-canary/ingress-canary-7q9qf" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.445228 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9102ddf1-e140-48e7-9ecd-14a4c007f5d5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mbxqf\" (UID: \"9102ddf1-e140-48e7-9ecd-14a4c007f5d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.445712 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:26.945695729 +0000 UTC m=+142.979018298 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.445860 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a05e6ff-179f-4a04-9fc2-524e31980467-metrics-tls\") pod \"dns-default-x2l69\" (UID: \"8a05e6ff-179f-4a04-9fc2-524e31980467\") " pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.446119 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bdad60bd-8af5-439a-a62e-edf676281c47-signing-key\") pod \"service-ca-9c57cc56f-gv2pd\" (UID: \"bdad60bd-8af5-439a-a62e-edf676281c47\") " pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.446339 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4087f246-2160-469e-8ad1-d88c147ff7c0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xfb4j\" (UID: \"4087f246-2160-469e-8ad1-d88c147ff7c0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.446871 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46582f7f-c6b0-4ae3-9103-4a4754304438-secret-volume\") pod \"collect-profiles-29524635-psnb6\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.447162 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/73661022-0008-4452-b140-f0a75e4c40c7-proxy-tls\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.447191 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f848507-d616-4d06-885f-d84210d9b4a0-config\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.448349 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f848507-d616-4d06-885f-d84210d9b4a0-serving-cert\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.450845 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1fe71123-0d33-41fa-b582-02d70177d0f0-profile-collector-cert\") pod \"catalog-operator-68c6474976-x52wm\" (UID: \"1fe71123-0d33-41fa-b582-02d70177d0f0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.451518 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw2l6\" (UniqueName: \"kubernetes.io/projected/ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88-kube-api-access-lw2l6\") pod \"packageserver-d55dfcdfc-t22fw\" (UID: 
\"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.451764 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1fe71123-0d33-41fa-b582-02d70177d0f0-srv-cert\") pod \"catalog-operator-68c6474976-x52wm\" (UID: \"1fe71123-0d33-41fa-b582-02d70177d0f0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.453118 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/056df788-349b-4549-88ab-66bbc2ff6afb-node-bootstrap-token\") pod \"machine-config-server-znn5k\" (UID: \"056df788-349b-4549-88ab-66bbc2ff6afb\") " pod="openshift-machine-config-operator/machine-config-server-znn5k" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.464628 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6383e6d2-7e9e-4927-a55a-f574e48d316d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xg4d5\" (UID: \"6383e6d2-7e9e-4927-a55a-f574e48d316d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.472820 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp"] Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.485512 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-bound-sa-token\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 
05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.502819 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20a18862-6cbd-4fb1-9d69-ae768e0afddd-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-jmswj\" (UID: \"20a18862-6cbd-4fb1-9d69-ae768e0afddd\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.526366 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crvb2\" (UniqueName: \"kubernetes.io/projected/af89e320-2661-4860-8079-0c1ff810d97a-kube-api-access-crvb2\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.544499 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.544851 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvh75\" (UniqueName: \"kubernetes.io/projected/053058a2-c542-41f4-b393-1be45501cfa9-kube-api-access-rvh75\") pod \"openshift-controller-manager-operator-756b6f6bc6-v57rh\" (UID: \"053058a2-c542-41f4-b393-1be45501cfa9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh"
Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.544929 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.044916445 +0000 UTC m=+143.078239014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.561332 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mlxbg"]
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.567492 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/af89e320-2661-4860-8079-0c1ff810d97a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dgcbg\" (UID: \"af89e320-2661-4860-8079-0c1ff810d97a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg"
Feb 19 05:27:26 crc kubenswrapper[5012]: W0219 05:27:26.570161 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ff8f20f_5302_4b7a_826c_5d557c65c0f3.slice/crio-a6f2569260b6928a746b0541013161dd385ea0ab1aad5d9524e6efae3299b362 WatchSource:0}: Error finding container a6f2569260b6928a746b0541013161dd385ea0ab1aad5d9524e6efae3299b362: Status 404 returned error can't find the container with id a6f2569260b6928a746b0541013161dd385ea0ab1aad5d9524e6efae3299b362
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.582990 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft4kq\" (UniqueName: \"kubernetes.io/projected/9d9907b5-e862-4242-b233-ed39e5de515a-kube-api-access-ft4kq\") pod \"openshift-config-operator-7777fb866f-9kvdd\" (UID: \"9d9907b5-e862-4242-b233-ed39e5de515a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.600111 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.602857 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-975f8\" (UniqueName: \"kubernetes.io/projected/c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8-kube-api-access-975f8\") pod \"router-default-5444994796-xphkg\" (UID: \"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8\") " pod="openshift-ingress/router-default-5444994796-xphkg"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.604920 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.613277 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-twxgh"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.617567 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" event={"ID":"af251e39-e77d-4cf8-a359-02645dc98b38","Type":"ContainerStarted","Data":"04a0479ceca2d7c3d98f48841a72ac8f9cdcfe7a51cd069a0da195c69a50dcc4"}
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.617615 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" event={"ID":"af251e39-e77d-4cf8-a359-02645dc98b38","Type":"ContainerStarted","Data":"16428c123ec8b757275908431734a8ff065b22d9c78cd4eb6cac9268a1b80501"}
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.619789 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" event={"ID":"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37","Type":"ContainerStarted","Data":"fdfd96a1742cbc885fb02908713154e8eeafc0be605934b38fea0b959dfb94fa"}
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.619854 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" event={"ID":"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37","Type":"ContainerStarted","Data":"6bcb27d02c242e50f41dff5d3edfb6d23e7b0ec6741fafd6e98e30f973688d1a"}
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.619865 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" event={"ID":"5c537eae-5a27-4a4d-ba9e-0fd7efe72f37","Type":"ContainerStarted","Data":"0d7741dd8935ca80837ae4f1d3e7c159a96896d5f1a49cea2f52d5089d348d51"}
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.620748 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx6d8\" (UniqueName: \"kubernetes.io/projected/c4edd2db-a884-46ac-9a12-0cd2a5daaeb5-kube-api-access-dx6d8\") pod \"downloads-7954f5f757-tjxj6\" (UID: \"c4edd2db-a884-46ac-9a12-0cd2a5daaeb5\") " pod="openshift-console/downloads-7954f5f757-tjxj6"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.624242 5012 generic.go:334] "Generic (PLEG): container finished" podID="462e6b9c-5e51-439d-aee8-9e7651b8c35a" containerID="c648ff6aec50cfe6e7d2a4e378014c657c533b03b6123ec965f8259cf201507b" exitCode=0
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.624294 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" event={"ID":"462e6b9c-5e51-439d-aee8-9e7651b8c35a","Type":"ContainerDied","Data":"c648ff6aec50cfe6e7d2a4e378014c657c533b03b6123ec965f8259cf201507b"}
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.624363 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" event={"ID":"462e6b9c-5e51-439d-aee8-9e7651b8c35a","Type":"ContainerStarted","Data":"0a9f48a203a0a3a4a70e04f238a6e34e7595f66b91ff57dc3097a43f3ff6ddfa"}
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.625888 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.626957 5012 generic.go:334] "Generic (PLEG): container finished" podID="4888722d-d5dd-4748-ac7b-a1d11ba08e6e" containerID="8f1c467ab4f27a880f493ba53ce7139248e78218a82a593f1eae696eaccae534" exitCode=0
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.627060 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" event={"ID":"4888722d-d5dd-4748-ac7b-a1d11ba08e6e","Type":"ContainerDied","Data":"8f1c467ab4f27a880f493ba53ce7139248e78218a82a593f1eae696eaccae534"}
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.627128 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" event={"ID":"4888722d-d5dd-4748-ac7b-a1d11ba08e6e","Type":"ContainerStarted","Data":"dceabac4fd3c41899884d1330da27bd5b20c6eac03e5235cf07da17192b3dc26"}
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.631384 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mlxbg" event={"ID":"5ff8f20f-5302-4b7a-826c-5d557c65c0f3","Type":"ContainerStarted","Data":"a6f2569260b6928a746b0541013161dd385ea0ab1aad5d9524e6efae3299b362"}
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.638395 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" event={"ID":"1387b34e-3233-49a1-9e37-ef1e7f4fb660","Type":"ContainerStarted","Data":"40ecc847082b185c9f5608425745da5dba3e87711df30e0f64bb96d2e7855856"}
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.642249 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.643584 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" event={"ID":"7e9dd710-d0ec-443f-a081-b18c4b6abe36","Type":"ContainerStarted","Data":"d41d8bd2ca6cc54e0495b26c42ee87c5303f40e928d5ca5c25add9b16457d3a2"}
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.643649 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.643663 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" event={"ID":"7e9dd710-d0ec-443f-a081-b18c4b6abe36","Type":"ContainerStarted","Data":"1ee3dd9b34ee54e0754750a439b4590af9a0a688e92512f756cbea34daf382ca"}
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.645522 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2ef24f0-0d7d-4d25-a839-b650893a8332-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.646341 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.646693 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.146679891 +0000 UTC m=+143.180002460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.647121 5012 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-ntrlp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.647166 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" podUID="7e9dd710-d0ec-443f-a081-b18c4b6abe36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.649700 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xphkg"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.653651 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" event={"ID":"5bb4ce13-477c-4c8d-89b5-0d6cc099095c","Type":"ContainerStarted","Data":"dc79225825a23c7457dc0cbf7bbf75007f42f24dffaccf93abbdc2e0d2881172"}
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.665746 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" event={"ID":"89f1d0f3-c220-4668-b822-3b20b64ebfb8","Type":"ContainerStarted","Data":"0ee8e83714534126962abe0549581114f5bc02b2fbc1bd415c2917a0b2e51cc4"}
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.665790 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" event={"ID":"89f1d0f3-c220-4668-b822-3b20b64ebfb8","Type":"ContainerStarted","Data":"5003562696efaf86d8b690a85cdcf58c161a34b94a16cc2ce64a20964ec94127"}
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.666068 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.667788 5012 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mn4f2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body=
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.667835 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" podUID="89f1d0f3-c220-4668-b822-3b20b64ebfb8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.672536 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.675766 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btf46\" (UniqueName: \"kubernetes.io/projected/c2ef24f0-0d7d-4d25-a839-b650893a8332-kube-api-access-btf46\") pod \"cluster-image-registry-operator-dc59b4c8b-qg5kd\" (UID: \"c2ef24f0-0d7d-4d25-a839-b650893a8332\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.708197 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.708688 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmlnj\" (UniqueName: \"kubernetes.io/projected/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-kube-api-access-pmlnj\") pod \"oauth-openshift-558db77b4-6mmvm\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.710785 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.717277 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gdlm\" (UniqueName: \"kubernetes.io/projected/a3d6e827-2fd3-4026-8bbb-b6336cf7c020-kube-api-access-2gdlm\") pod \"dns-operator-744455d44c-5lz5f\" (UID: \"a3d6e827-2fd3-4026-8bbb-b6336cf7c020\") " pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.748884 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.749748 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.249673441 +0000 UTC m=+143.282996010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.752095 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.752686 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.252665283 +0000 UTC m=+143.285987932 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.792937 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng"]
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.806731 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7"]
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.829042 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bn2r\" (UniqueName: \"kubernetes.io/projected/056df788-349b-4549-88ab-66bbc2ff6afb-kube-api-access-6bn2r\") pod \"machine-config-server-znn5k\" (UID: \"056df788-349b-4549-88ab-66bbc2ff6afb\") " pod="openshift-machine-config-operator/machine-config-server-znn5k"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.831355 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-znn5k"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.839895 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jq8v\" (UniqueName: \"kubernetes.io/projected/6ff5220b-0304-48dc-b2eb-e2bd2a2c8205-kube-api-access-6jq8v\") pod \"service-ca-operator-777779d784-gwtrd\" (UID: \"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.844361 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5kdx\" (UniqueName: \"kubernetes.io/projected/8a05e6ff-179f-4a04-9fc2-524e31980467-kube-api-access-c5kdx\") pod \"dns-default-x2l69\" (UID: \"8a05e6ff-179f-4a04-9fc2-524e31980467\") " pod="openshift-dns/dns-default-x2l69"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.844948 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g82wz\" (UniqueName: \"kubernetes.io/projected/f5a5c8b4-57c3-43fc-a404-2754d0e70c50-kube-api-access-g82wz\") pod \"ingress-canary-7q9qf\" (UID: \"f5a5c8b4-57c3-43fc-a404-2754d0e70c50\") " pod="openshift-ingress-canary/ingress-canary-7q9qf"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.844983 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qltv6\" (UniqueName: \"kubernetes.io/projected/bdad60bd-8af5-439a-a62e-edf676281c47-kube-api-access-qltv6\") pod \"service-ca-9c57cc56f-gv2pd\" (UID: \"bdad60bd-8af5-439a-a62e-edf676281c47\") " pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.848748 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c24k9\" (UniqueName: \"kubernetes.io/projected/46582f7f-c6b0-4ae3-9103-4a4754304438-kube-api-access-c24k9\") pod \"collect-profiles-29524635-psnb6\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6"
Feb 19 05:27:26 crc kubenswrapper[5012]: W0219 05:27:26.850627 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc75dab1e_8eb0_42e5_bc33_f0bf1ebb3dd8.slice/crio-3cc18f037d08f528d9486f30b54c03b3bc2a368afd5e2a0f8f65abd0b01d01b2 WatchSource:0}: Error finding container 3cc18f037d08f528d9486f30b54c03b3bc2a368afd5e2a0f8f65abd0b01d01b2: Status 404 returned error can't find the container with id 3cc18f037d08f528d9486f30b54c03b3bc2a368afd5e2a0f8f65abd0b01d01b2
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.857067 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.857128 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd"
Feb 19 05:27:26 crc kubenswrapper[5012]: W0219 05:27:26.857692 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod056df788_349b_4549_88ab_66bbc2ff6afb.slice/crio-a50a30afb10e585dd6a545d1b8c076b53501b2afc629b3e33d2e3eb0b6e3ec66 WatchSource:0}: Error finding container a50a30afb10e585dd6a545d1b8c076b53501b2afc629b3e33d2e3eb0b6e3ec66: Status 404 returned error can't find the container with id a50a30afb10e585dd6a545d1b8c076b53501b2afc629b3e33d2e3eb0b6e3ec66
Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.858920 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.358892582 +0000 UTC m=+143.392215151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.863013 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzf97\" (UniqueName: \"kubernetes.io/projected/2876c7dd-5979-49eb-ab61-8ffce07376b2-kube-api-access-kzf97\") pod \"olm-operator-6b444d44fb-jv9qx\" (UID: \"2876c7dd-5979-49eb-ab61-8ffce07376b2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.865332 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.865797 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.36578477 +0000 UTC m=+143.399107339 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.869290 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.876355 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.886702 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx5fm\" (UniqueName: \"kubernetes.io/projected/ab107439-3fd5-41e7-9d30-71962fc96028-kube-api-access-wx5fm\") pod \"migrator-59844c95c7-sppcx\" (UID: \"ab107439-3fd5-41e7-9d30-71962fc96028\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.896126 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.907746 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwl8b\" (UniqueName: \"kubernetes.io/projected/1fe71123-0d33-41fa-b582-02d70177d0f0-kube-api-access-zwl8b\") pod \"catalog-operator-68c6474976-x52wm\" (UID: \"1fe71123-0d33-41fa-b582-02d70177d0f0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.937587 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-tjxj6"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.943942 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nwdd\" (UniqueName: \"kubernetes.io/projected/9102ddf1-e140-48e7-9ecd-14a4c007f5d5-kube-api-access-5nwdd\") pod \"control-plane-machine-set-operator-78cbb6b69f-mbxqf\" (UID: \"9102ddf1-e140-48e7-9ecd-14a4c007f5d5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.958107 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcl96\" (UniqueName: \"kubernetes.io/projected/4f848507-d616-4d06-885f-d84210d9b4a0-kube-api-access-xcl96\") pod \"etcd-operator-b45778765-gwx52\" (UID: \"4f848507-d616-4d06-885f-d84210d9b4a0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.964548 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8"]
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.966042 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 05:27:26 crc kubenswrapper[5012]: E0219 05:27:26.966471 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.466455517 +0000 UTC m=+143.499778086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.981116 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.981516 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45jfl\" (UniqueName: \"kubernetes.io/projected/73661022-0008-4452-b140-f0a75e4c40c7-kube-api-access-45jfl\") pod \"machine-config-operator-74547568cd-rx82w\" (UID: \"73661022-0008-4452-b140-f0a75e4c40c7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w"
Feb 19 05:27:26 crc kubenswrapper[5012]: I0219 05:27:26.993075 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fdhm\" (UniqueName: \"kubernetes.io/projected/4087f246-2160-469e-8ad1-d88c147ff7c0-kube-api-access-9fdhm\") pod \"multus-admission-controller-857f4d67dd-xfb4j\" (UID: \"4087f246-2160-469e-8ad1-d88c147ff7c0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j"
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.018914 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx"
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.025317 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm"
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.030394 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6"
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.032604 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v76p\" (UniqueName: \"kubernetes.io/projected/562c18aa-5aed-4f1e-95f5-da1fe7c02523-kube-api-access-4v76p\") pod \"marketplace-operator-79b997595-kwd8z\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z"
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.034732 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz9rv\" (UniqueName: \"kubernetes.io/projected/59cc3a77-bf98-42ed-98d8-a921b7039c6f-kube-api-access-cz9rv\") pod \"csi-hostpathplugin-n8t75\" (UID: \"59cc3a77-bf98-42ed-98d8-a921b7039c6f\") " pod="hostpath-provisioner/csi-hostpathplugin-n8t75"
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.035746 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j"
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.053599 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx"
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.056002 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w"
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.062789 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z"
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.067295 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.067636 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.567623616 +0000 UTC m=+143.600946185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.067977 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd"
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.079293 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7q9qf"
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.081546 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg"]
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.095936 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52"
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.096171 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf"
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.101952 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-x2l69"
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.155705 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-n8t75"
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.186184 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.186963 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.686937743 +0000 UTC m=+143.720260312 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.188270 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.188669 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.68865779 +0000 UTC m=+143.721980359 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.269163 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tnq42" podStartSLOduration=122.269146754 podStartE2EDuration="2m2.269146754s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:27.268402894 +0000 UTC m=+143.301725463" watchObservedRunningTime="2026-02-19 05:27:27.269146754 +0000 UTC m=+143.302469323" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.289092 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.289550 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.789508701 +0000 UTC m=+143.822831270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.358356 5012 csr.go:261] certificate signing request csr-tdkx2 is approved, waiting to be issued Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.361875 5012 csr.go:257] certificate signing request csr-tdkx2 is issued Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.398548 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.399261 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:27.899249435 +0000 UTC m=+143.932572004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.499662 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.501655 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.001618658 +0000 UTC m=+144.034941227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.520288 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-twxgh"] Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.546353 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj"] Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.602978 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.603346 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.103329603 +0000 UTC m=+144.136652172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.703417 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.703814 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.203790423 +0000 UTC m=+144.237112992 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.703984 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.704234 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.204223565 +0000 UTC m=+144.237546134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.717616 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xphkg" event={"ID":"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8","Type":"ContainerStarted","Data":"3cc18f037d08f528d9486f30b54c03b3bc2a368afd5e2a0f8f65abd0b01d01b2"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.724399 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" event={"ID":"462e6b9c-5e51-439d-aee8-9e7651b8c35a","Type":"ContainerStarted","Data":"0c0f045ad8f20f4d2fa1d08ae6232c30b02c2dba6fbfca9cb8ffdaba769ddb7c"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.730414 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" event={"ID":"726872cb-1000-4656-beea-2bd59752199c","Type":"ContainerStarted","Data":"89a60f47a5cdc0e142d383055eb74d5054314b0d415d54821b99d42ee41fc662"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.734291 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" event={"ID":"af89e320-2661-4860-8079-0c1ff810d97a","Type":"ContainerStarted","Data":"8b47b8c80257fd82dfb429e3a88ae831fb90a10c1e5ddc7437102c4a757ab2fa"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.750458 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-thnmn" podStartSLOduration=122.75044415 podStartE2EDuration="2m2.75044415s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:27.749728261 +0000 UTC m=+143.783050830" watchObservedRunningTime="2026-02-19 05:27:27.75044415 +0000 UTC m=+143.783766719" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.756419 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-twxgh" event={"ID":"47b7dc89-8538-41f1-b569-a2b6dcbf8f13","Type":"ContainerStarted","Data":"b9cfdf3cd72a843ab182956639ea87e0e4240a6e9a52d11112a66cd54b11b830"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.771628 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" event={"ID":"4888722d-d5dd-4748-ac7b-a1d11ba08e6e","Type":"ContainerStarted","Data":"4166b6fa7d423d9ae3f38af577e38ae7eabc1c871fd1a57bc3c8cb70d637aac8"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.773565 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" event={"ID":"78dedde0-cb75-4ee7-8735-e6f071a02b10","Type":"ContainerStarted","Data":"ec3759604eb72995a560525895feb2bb0e0e487cb41ebd60f3cbb1221dee904b"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.774757 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" event={"ID":"1387b34e-3233-49a1-9e37-ef1e7f4fb660","Type":"ContainerStarted","Data":"fa42c931d54961e0da972dd6d69040379a570a0d580e32912d21ba279f686879"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.777382 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-mlxbg" event={"ID":"5ff8f20f-5302-4b7a-826c-5d557c65c0f3","Type":"ContainerStarted","Data":"cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.778689 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-znn5k" event={"ID":"056df788-349b-4549-88ab-66bbc2ff6afb","Type":"ContainerStarted","Data":"a50a30afb10e585dd6a545d1b8c076b53501b2afc629b3e33d2e3eb0b6e3ec66"} Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.803694 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.805126 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.806641 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.807554 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.307536824 +0000 UTC m=+144.340859383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.846889 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kjwlb" podStartSLOduration=122.846874551 podStartE2EDuration="2m2.846874551s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:27.797499329 +0000 UTC m=+143.830821898" watchObservedRunningTime="2026-02-19 05:27:27.846874551 +0000 UTC m=+143.880197120" Feb 19 05:27:27 crc kubenswrapper[5012]: I0219 05:27:27.918211 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:27 crc kubenswrapper[5012]: E0219 05:27:27.919542 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.41953116 +0000 UTC m=+144.452853729 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.019391 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.024508 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.524469443 +0000 UTC m=+144.557792012 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.038631 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.039084 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.539067633 +0000 UTC m=+144.572390202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.139628 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.140011 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.639984976 +0000 UTC m=+144.673307545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.140355 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.140791 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.640771337 +0000 UTC m=+144.674093906 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.241840 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.242187 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.742172254 +0000 UTC m=+144.775494823 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.267217 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" podStartSLOduration=122.267202859 podStartE2EDuration="2m2.267202859s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:28.266084038 +0000 UTC m=+144.299406607" watchObservedRunningTime="2026-02-19 05:27:28.267202859 +0000 UTC m=+144.300525428" Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.348609 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.349167 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.849124732 +0000 UTC m=+144.882447301 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.364917 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-19 05:22:27 +0000 UTC, rotation deadline is 2026-11-04 07:12:59.950814456 +0000 UTC Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.365011 5012 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6193h45m31.585806979s for next certificate rotation Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.454508 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.455199 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:28.955183486 +0000 UTC m=+144.988506055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.467103 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" podStartSLOduration=122.467087331 podStartE2EDuration="2m2.467087331s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:28.46556984 +0000 UTC m=+144.498892409" watchObservedRunningTime="2026-02-19 05:27:28.467087331 +0000 UTC m=+144.500409900" Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.555923 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.556224 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.056213002 +0000 UTC m=+145.089535561 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.660229 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.660716 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.160700672 +0000 UTC m=+145.194023241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.762682 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.763513 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.263501477 +0000 UTC m=+145.296824046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.785010 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-6qvzq" podStartSLOduration=122.784995225 podStartE2EDuration="2m2.784995225s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:28.752689111 +0000 UTC m=+144.786011680" watchObservedRunningTime="2026-02-19 05:27:28.784995225 +0000 UTC m=+144.818317794" Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.796615 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" event={"ID":"4888722d-d5dd-4748-ac7b-a1d11ba08e6e","Type":"ContainerStarted","Data":"549b548692a811da65826548279e6fac63057af4d2030aa48751c4b6e8815a66"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.796650 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" event={"ID":"726872cb-1000-4656-beea-2bd59752199c","Type":"ContainerStarted","Data":"7a99f0faafc66c81105a0f44329bb5ed7a91b8a3f7f5fe2bf0a012a510280b67"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.796663 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" 
event={"ID":"726872cb-1000-4656-beea-2bd59752199c","Type":"ContainerStarted","Data":"c8ab7df28adc5a4f3aa17eda45aae326defdb2a0480e7720dd4a1ee12b84030c"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.796673 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" event={"ID":"746299dc-637f-42a3-ad0d-0de202bae64e","Type":"ContainerStarted","Data":"6b6fe6b1308e42932c8727b67e3f9826feba304d9b2b38808224f1ecf2123c5f"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.796688 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" event={"ID":"746299dc-637f-42a3-ad0d-0de202bae64e","Type":"ContainerStarted","Data":"aef3f050ec04f629fbd1ef28d8100a641bbc5c54c1fd99199ea8d822d14d4fed"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.796697 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-znn5k" event={"ID":"056df788-349b-4549-88ab-66bbc2ff6afb","Type":"ContainerStarted","Data":"d494240f5e1e2065a546720a470c0c0e0ef27c2a7f601397d3fedb3284413b4f"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.796710 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xphkg" event={"ID":"c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8","Type":"ContainerStarted","Data":"4f7f1fb2e067945b8e4ce6e249db27ab0fcb08e71d70e36228c6f4b6b12bfa67"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.801442 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" event={"ID":"78dedde0-cb75-4ee7-8735-e6f071a02b10","Type":"ContainerStarted","Data":"16184c5994c7790a21ed9a5edb83dab3783976ba6a833b0ab5bb8f3684f4c903"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.803622 5012 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" event={"ID":"20a18862-6cbd-4fb1-9d69-ae768e0afddd","Type":"ContainerStarted","Data":"310c06334c510410ec234d0849b19c6d1a48feed1c6926f3e5f3d29738a0ace3"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.803665 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" event={"ID":"20a18862-6cbd-4fb1-9d69-ae768e0afddd","Type":"ContainerStarted","Data":"12f0cb882a65d9ad07cedd67504bd7624b01d9934ca70532e1520bd44010cea1"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.805691 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-twxgh" event={"ID":"47b7dc89-8538-41f1-b569-a2b6dcbf8f13","Type":"ContainerStarted","Data":"15440b59cd23f4ccbdccba4cd40eff97e7d8dc84759b3a02a5b2b1a6f479c41b"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.806372 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.808317 5012 patch_prober.go:28] interesting pod/console-operator-58897d9998-twxgh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.808377 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-twxgh" podUID="47b7dc89-8538-41f1-b569-a2b6dcbf8f13" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.811261 5012 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" event={"ID":"af89e320-2661-4860-8079-0c1ff810d97a","Type":"ContainerStarted","Data":"0dec7d93893fdb279666f064114255cb117415740e10ead52a54afd0bc425909"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.811327 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" event={"ID":"af89e320-2661-4860-8079-0c1ff810d97a","Type":"ContainerStarted","Data":"dbaf694982512e24ab349494907703f0de521dff3496fd8613b1b7de21123d57"} Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.868896 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.870471 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.370447695 +0000 UTC m=+145.403770264 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.880654 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" podStartSLOduration=123.880638834 podStartE2EDuration="2m3.880638834s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:28.88050483 +0000 UTC m=+144.913827399" watchObservedRunningTime="2026-02-19 05:27:28.880638834 +0000 UTC m=+144.913961403" Feb 19 05:27:28 crc kubenswrapper[5012]: I0219 05:27:28.970576 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:28 crc kubenswrapper[5012]: E0219 05:27:28.973417 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.473402204 +0000 UTC m=+145.506724773 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.007904 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ccstp" podStartSLOduration=123.007885228 podStartE2EDuration="2m3.007885228s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:28.9805538 +0000 UTC m=+145.013876369" watchObservedRunningTime="2026-02-19 05:27:29.007885228 +0000 UTC m=+145.041207787" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.043755 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dhgng" podStartSLOduration=123.04373618 podStartE2EDuration="2m3.04373618s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.015003993 +0000 UTC m=+145.048326562" watchObservedRunningTime="2026-02-19 05:27:29.04373618 +0000 UTC m=+145.077058749" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.072797 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.072900 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.572886228 +0000 UTC m=+145.606208797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.073082 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.073386 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.573378341 +0000 UTC m=+145.606700910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.076705 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-jmswj" podStartSLOduration=123.076692762 podStartE2EDuration="2m3.076692762s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.044228163 +0000 UTC m=+145.077550732" watchObservedRunningTime="2026-02-19 05:27:29.076692762 +0000 UTC m=+145.110015331" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.078899 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-znn5k" podStartSLOduration=5.078892732 podStartE2EDuration="5.078892732s" podCreationTimestamp="2026-02-19 05:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.077742191 +0000 UTC m=+145.111064760" watchObservedRunningTime="2026-02-19 05:27:29.078892732 +0000 UTC m=+145.112215301" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.103965 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-mlxbg" podStartSLOduration=124.103951138 podStartE2EDuration="2m4.103951138s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.102827307 +0000 UTC m=+145.136149876" watchObservedRunningTime="2026-02-19 05:27:29.103951138 +0000 UTC m=+145.137273707" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.175712 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.176360 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.67633481 +0000 UTC m=+145.709657379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.221200 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" podStartSLOduration=123.221183508 podStartE2EDuration="2m3.221183508s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.185598124 +0000 UTC m=+145.218920693" watchObservedRunningTime="2026-02-19 05:27:29.221183508 +0000 UTC m=+145.254506077" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.221335 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-twxgh" podStartSLOduration=124.221330482 podStartE2EDuration="2m4.221330482s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.216848209 +0000 UTC m=+145.250170778" watchObservedRunningTime="2026-02-19 05:27:29.221330482 +0000 UTC m=+145.254653041" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.272798 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xphkg" podStartSLOduration=123.272783091 podStartE2EDuration="2m3.272783091s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.245605807 +0000 UTC m=+145.278928376" watchObservedRunningTime="2026-02-19 05:27:29.272783091 +0000 UTC m=+145.306105660" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.283150 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.283435 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.783423672 +0000 UTC m=+145.816746241 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.293326 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tv8j7" podStartSLOduration=123.293294982 podStartE2EDuration="2m3.293294982s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.271364882 +0000 UTC m=+145.304687451" watchObservedRunningTime="2026-02-19 05:27:29.293294982 +0000 UTC m=+145.326617551" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.296897 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh"] Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.308024 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dgcbg" podStartSLOduration=123.308012095 podStartE2EDuration="2m3.308012095s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.306472423 +0000 UTC m=+145.339794992" watchObservedRunningTime="2026-02-19 05:27:29.308012095 +0000 UTC m=+145.341334664" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.344863 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5"] Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.384817 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.385332 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.885286791 +0000 UTC m=+145.918609360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.470867 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw"] Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.489137 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 
05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.489730 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:29.98971071 +0000 UTC m=+146.023033279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.509877 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5lz5f"] Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.529201 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd"] Feb 19 05:27:29 crc kubenswrapper[5012]: W0219 05:27:29.548801 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2ef24f0_0d7d_4d25_a839_b650893a8332.slice/crio-4170f89dcf6400f27a8d30cf11a2759d84fa6b0636d9e001fc3de2c9e351fb8a WatchSource:0}: Error finding container 4170f89dcf6400f27a8d30cf11a2759d84fa6b0636d9e001fc3de2c9e351fb8a: Status 404 returned error can't find the container with id 4170f89dcf6400f27a8d30cf11a2759d84fa6b0636d9e001fc3de2c9e351fb8a Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.555216 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd"] Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.593924 5012 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.594513 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:30.094497279 +0000 UTC m=+146.127819848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.604132 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6"] Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.652404 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.663084 5012 patch_prober.go:28] interesting pod/router-default-5444994796-xphkg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 05:27:29 crc kubenswrapper[5012]: [-]has-synced failed: reason withheld Feb 19 05:27:29 crc 
kubenswrapper[5012]: [+]process-running ok Feb 19 05:27:29 crc kubenswrapper[5012]: healthz check failed Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.663140 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xphkg" podUID="c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.701284 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.701672 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:30.201660303 +0000 UTC m=+146.234982872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.804847 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.805477 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:30.305461395 +0000 UTC m=+146.338783964 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.830219 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-tjxj6"]
Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.850694 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" event={"ID":"a3d6e827-2fd3-4026-8bbb-b6336cf7c020","Type":"ContainerStarted","Data":"7bd06768a70620d61b4c8bd3cc981fddb220ce5161cbcb2b453a85acd432af62"}
Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.861564 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" event={"ID":"e69d69b3-8e9f-4413-93c1-3c1f77388221","Type":"ContainerStarted","Data":"93fb9b589d6289ffd4851f1e88b8f5cfe19b0d2d25f63362c467d31a22adff2e"}
Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.873044 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" event={"ID":"9d9907b5-e862-4242-b233-ed39e5de515a","Type":"ContainerStarted","Data":"291503e9c76ce615eb7999a82601dfb35c2e6137b50e2c0b8c3822c0aa06afcc"}
Feb 19 05:27:29 crc kubenswrapper[5012]: W0219 05:27:29.873596 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4edd2db_a884_46ac_9a12_0cd2a5daaeb5.slice/crio-f74174bc886f32fbb6835d34a2a8e317e36dd1c54827c3053e9a817ce011c1aa WatchSource:0}: Error finding container f74174bc886f32fbb6835d34a2a8e317e36dd1c54827c3053e9a817ce011c1aa: Status 404 returned error can't find the container with id f74174bc886f32fbb6835d34a2a8e317e36dd1c54827c3053e9a817ce011c1aa
Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.891961 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" event={"ID":"6383e6d2-7e9e-4927-a55a-f574e48d316d","Type":"ContainerStarted","Data":"b5ba991ec28c36a1b0da26f02b14fe82ecdacedcdc13253933d0852244f806d5"}
Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.894996 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" event={"ID":"c2ef24f0-0d7d-4d25-a839-b650893a8332","Type":"ContainerStarted","Data":"4170f89dcf6400f27a8d30cf11a2759d84fa6b0636d9e001fc3de2c9e351fb8a"}
Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.906563 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:29 crc kubenswrapper[5012]: E0219 05:27:29.906960 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:30.406944643 +0000 UTC m=+146.440267212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.917673 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" event={"ID":"746299dc-637f-42a3-ad0d-0de202bae64e","Type":"ContainerStarted","Data":"714988db22596c1d65aaf308c4916997186183702fc2ddf6a89dc3763690e18d"}
Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.942657 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" event={"ID":"053058a2-c542-41f4-b393-1be45501cfa9","Type":"ContainerStarted","Data":"fa048d2900fde8ad6ca662cd4311116c9cec4123b78efbc7808031f85e351485"}
Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.943084 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" event={"ID":"053058a2-c542-41f4-b393-1be45501cfa9","Type":"ContainerStarted","Data":"247fdcb9e7d20812e4c8624cd9b32ab824c3b7117663dfe72f010e8d9a6c1a4e"}
Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.969724 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j8fg8" podStartSLOduration=124.956289364 podStartE2EDuration="2m4.956289364s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.954102925 +0000 UTC m=+145.987425484" watchObservedRunningTime="2026-02-19 05:27:29.956289364 +0000 UTC m=+145.989611933"
Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.982810 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" event={"ID":"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88","Type":"ContainerStarted","Data":"233d67638e79328975d3351376616c9acd42140bec2ca07eea26d5f35609f2f4"}
Feb 19 05:27:29 crc kubenswrapper[5012]: I0219 05:27:29.984069 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v57rh" podStartSLOduration=123.984044214 podStartE2EDuration="2m3.984044214s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:29.984020264 +0000 UTC m=+146.017342833" watchObservedRunningTime="2026-02-19 05:27:29.984044214 +0000 UTC m=+146.017366773"
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.006398 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6mmvm"]
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.010099 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.011407 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:30.511376983 +0000 UTC m=+146.544699552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.033843 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gv2pd"]
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.042436 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" podStartSLOduration=124.042415683 podStartE2EDuration="2m4.042415683s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:30.033748535 +0000 UTC m=+146.067071104" watchObservedRunningTime="2026-02-19 05:27:30.042415683 +0000 UTC m=+146.075738252"
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.043273 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xfb4j"]
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.089155 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwd8z"]
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.099182 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd"]
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.106122 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n8t75"]
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.112555 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.119970 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:30.619955225 +0000 UTC m=+146.653277794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.122253 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w"]
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.129437 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx"]
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.138350 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7q9qf"]
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.148922 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm"]
Feb 19 05:27:30 crc kubenswrapper[5012]: W0219 05:27:30.150556 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdad60bd_8af5_439a_a62e_edf676281c47.slice/crio-238a1a6613944c23bcbc5511f8351d95f7ddcdcecbc40cdcd70078a275508a6b WatchSource:0}: Error finding container 238a1a6613944c23bcbc5511f8351d95f7ddcdcecbc40cdcd70078a275508a6b: Status 404 returned error can't find the container with id 238a1a6613944c23bcbc5511f8351d95f7ddcdcecbc40cdcd70078a275508a6b
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.204775 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk"
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.207275 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk"
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.218149 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.218655 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:30.718635177 +0000 UTC m=+146.751957746 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.221660 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk"
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.223473 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx"]
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.265893 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x2l69"]
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.322030 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.334159 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:30.834120919 +0000 UTC m=+146.867443488 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.371075 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gwx52"]
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.422761 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.431423 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9"
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.431475 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9"
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.437675 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf"]
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.437723 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6"]
Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.445504 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:30.945471728 +0000 UTC m=+146.978794297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.546847 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.547378 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.047359987 +0000 UTC m=+147.080682556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.582495 5012 patch_prober.go:28] interesting pod/apiserver-76f77b778f-hjmb9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 19 05:27:30 crc kubenswrapper[5012]: [+]log ok
Feb 19 05:27:30 crc kubenswrapper[5012]: [+]etcd ok
Feb 19 05:27:30 crc kubenswrapper[5012]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 19 05:27:30 crc kubenswrapper[5012]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 19 05:27:30 crc kubenswrapper[5012]: [+]poststarthook/max-in-flight-filter ok
Feb 19 05:27:30 crc kubenswrapper[5012]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 19 05:27:30 crc kubenswrapper[5012]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Feb 19 05:27:30 crc kubenswrapper[5012]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Feb 19 05:27:30 crc kubenswrapper[5012]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Feb 19 05:27:30 crc kubenswrapper[5012]: [+]poststarthook/project.openshift.io-projectcache ok
Feb 19 05:27:30 crc kubenswrapper[5012]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Feb 19 05:27:30 crc kubenswrapper[5012]: [-]poststarthook/openshift.io-startinformers failed: reason withheld
Feb 19 05:27:30 crc kubenswrapper[5012]: [+]poststarthook/openshift.io-restmapperupdater ok
Feb 19 05:27:30 crc kubenswrapper[5012]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 19 05:27:30 crc kubenswrapper[5012]: livez check failed
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.582549 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" podUID="4888722d-d5dd-4748-ac7b-a1d11ba08e6e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.648171 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.649038 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.149011711 +0000 UTC m=+147.182334280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.649599 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.650368 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.150294326 +0000 UTC m=+147.183616925 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.672610 5012 patch_prober.go:28] interesting pod/router-default-5444994796-xphkg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 05:27:30 crc kubenswrapper[5012]: [-]has-synced failed: reason withheld
Feb 19 05:27:30 crc kubenswrapper[5012]: [+]process-running ok
Feb 19 05:27:30 crc kubenswrapper[5012]: healthz check failed
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.672661 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xphkg" podUID="c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.750772 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.751311 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.251278121 +0000 UTC m=+147.284600690 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.852416 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.853117 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.353090838 +0000 UTC m=+147.386413407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.953378 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.953505 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.453476896 +0000 UTC m=+147.486799465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.954075 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:30 crc kubenswrapper[5012]: E0219 05:27:30.954472 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.454457162 +0000 UTC m=+147.487779731 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.985353 5012 patch_prober.go:28] interesting pod/console-operator-58897d9998-twxgh container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 05:27:30 crc kubenswrapper[5012]: I0219 05:27:30.985428 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-twxgh" podUID="47b7dc89-8538-41f1-b569-a2b6dcbf8f13" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.005552 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" event={"ID":"6383e6d2-7e9e-4927-a55a-f574e48d316d","Type":"ContainerStarted","Data":"934c363dcd8da69c6385acb864b1ee90cc3cf64cd81262fdf49825772174c8f0"}
Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.008516 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" event={"ID":"a3d6e827-2fd3-4026-8bbb-b6336cf7c020","Type":"ContainerStarted","Data":"dd0768f51b314debb4667760108777dd51c2e98364a09dac6417b1954e7afb69"}
Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.046768 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xg4d5" podStartSLOduration=125.046738949 podStartE2EDuration="2m5.046738949s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:31.034419842 +0000 UTC m=+147.067742411" watchObservedRunningTime="2026-02-19 05:27:31.046738949 +0000 UTC m=+147.080061518"
Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.056989 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 05:27:31 crc kubenswrapper[5012]: E0219 05:27:31.057520 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.557488733 +0000 UTC m=+147.590811302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.075207 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" event={"ID":"e69d69b3-8e9f-4413-93c1-3c1f77388221","Type":"ContainerStarted","Data":"568922a2fc5b3da6777ed652109652603b30e55c13b96602a9bba6fd75817c67"}
Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.106865 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" event={"ID":"bdad60bd-8af5-439a-a62e-edf676281c47","Type":"ContainerStarted","Data":"f72550df5ab68a936ecdc6b080e0f399a05b553de8ce12467f05fe786041c8cc"}
Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.106926 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" event={"ID":"bdad60bd-8af5-439a-a62e-edf676281c47","Type":"ContainerStarted","Data":"238a1a6613944c23bcbc5511f8351d95f7ddcdcecbc40cdcd70078a275508a6b"}
Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.145968 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" event={"ID":"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205","Type":"ContainerStarted","Data":"140000039b488d6a74e6dca0a622136e9f5d95d4316284ef411cd86f7b4b5bdc"}
Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.146010 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" event={"ID":"6ff5220b-0304-48dc-b2eb-e2bd2a2c8205","Type":"ContainerStarted","Data":"55e0d410865219aa80bb254f3000f0c06084b6058c11a4d834d22107a349d1de"}
Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.159525 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp"
Feb 19 05:27:31 crc kubenswrapper[5012]: E0219 05:27:31.159838 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.659827325 +0000 UTC m=+147.693149894 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.165678 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-gv2pd" podStartSLOduration=125.165660605 podStartE2EDuration="2m5.165660605s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:31.164554545 +0000 UTC m=+147.197877114" watchObservedRunningTime="2026-02-19 05:27:31.165660605 +0000 UTC m=+147.198983174"
Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.175368 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tjxj6" event={"ID":"c4edd2db-a884-46ac-9a12-0cd2a5daaeb5","Type":"ContainerStarted","Data":"31179a8dde6740a2622f1382e4cfb69846d1c6177a3354d5743bb90e841822f9"}
Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.175411 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-tjxj6" event={"ID":"c4edd2db-a884-46ac-9a12-0cd2a5daaeb5","Type":"ContainerStarted","Data":"f74174bc886f32fbb6835d34a2a8e317e36dd1c54827c3053e9a817ce011c1aa"}
Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.176045 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-tjxj6"
Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.186700 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" event={"ID":"73661022-0008-4452-b140-f0a75e4c40c7","Type":"ContainerStarted","Data":"596b1e354506a9e5a1e64ace0f45a0811e985252dae4a138f059023443d63e80"}
Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.188037 5012 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjxj6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.188074 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tjxj6" podUID="c4edd2db-a884-46ac-9a12-0cd2a5daaeb5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.198898 5012 pod_startup_latency_tracker.go:104]
"Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gwtrd" podStartSLOduration=125.198882765 podStartE2EDuration="2m5.198882765s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:31.194383271 +0000 UTC m=+147.227705840" watchObservedRunningTime="2026-02-19 05:27:31.198882765 +0000 UTC m=+147.232205334" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.226913 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-tjxj6" podStartSLOduration=125.226895012 podStartE2EDuration="2m5.226895012s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:31.226710627 +0000 UTC m=+147.260033196" watchObservedRunningTime="2026-02-19 05:27:31.226895012 +0000 UTC m=+147.260217571" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.260093 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:31 crc kubenswrapper[5012]: E0219 05:27:31.261146 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.761127889 +0000 UTC m=+147.794450458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.262942 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" event={"ID":"9102ddf1-e140-48e7-9ecd-14a4c007f5d5","Type":"ContainerStarted","Data":"e41ab33fc5b8652a7a2c955bb031c0953133b4a1470c9124e9653b6dbd68bbc9"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.274331 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n8t75" event={"ID":"59cc3a77-bf98-42ed-98d8-a921b7039c6f","Type":"ContainerStarted","Data":"6bc39bd96b4c359628dad75a9d900061768b52ad0369c3f6ed3a120400ee5c52"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.357638 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" event={"ID":"ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88","Type":"ContainerStarted","Data":"91ae4aa2e6aa7b6ec717573f0e3eaf4b00be341eb7b57b7438299cd791cd3906"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.358892 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.361611 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:31 crc kubenswrapper[5012]: E0219 05:27:31.361955 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.861944979 +0000 UTC m=+147.895267548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.364969 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx" event={"ID":"ab107439-3fd5-41e7-9d30-71962fc96028","Type":"ContainerStarted","Data":"da402da67ca4c5fc52c3d598b03036feb962066b3ef43bac03f54bd427d48c4b"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.365034 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx" event={"ID":"ab107439-3fd5-41e7-9d30-71962fc96028","Type":"ContainerStarted","Data":"d0f9af79507258c7fb52f8917ea0f2b463469fd198535aa579fda9f6d003c604"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.369222 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7q9qf" event={"ID":"f5a5c8b4-57c3-43fc-a404-2754d0e70c50","Type":"ContainerStarted","Data":"00493f76ed99bc7d84dfd7a4293c4e667388f5f521879ae230494e707881d54b"} Feb 19 05:27:31 crc 
kubenswrapper[5012]: I0219 05:27:31.373571 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" event={"ID":"4f848507-d616-4d06-885f-d84210d9b4a0","Type":"ContainerStarted","Data":"4b19c01b7e4246e19fcfb5539ff1f59ef0ecd09dfb7d5005b2f7aee4820bffde"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.378101 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x2l69" event={"ID":"8a05e6ff-179f-4a04-9fc2-524e31980467","Type":"ContainerStarted","Data":"e12e02fdf3553333025d40efc1ffd6c4531ba3e283e1523b43458da16a159c64"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.381762 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" event={"ID":"ce585ab5-2554-4d20-8789-cf5bfa8e45a7","Type":"ContainerStarted","Data":"bdf60105a735686277da3c5b1467ac389878a76a65699e5227c68bdc76452b4e"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.388183 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" event={"ID":"46582f7f-c6b0-4ae3-9103-4a4754304438","Type":"ContainerStarted","Data":"9d6ad88222eb3dc7a89c4d09501dab7f14514064a3aea52304068199c3bce69f"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.390862 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" event={"ID":"562c18aa-5aed-4f1e-95f5-da1fe7c02523","Type":"ContainerStarted","Data":"48aada40317b892d9a223a57a3ac3503ec0ff8bc3ff5df783ac9de195fd3495f"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.390893 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" event={"ID":"562c18aa-5aed-4f1e-95f5-da1fe7c02523","Type":"ContainerStarted","Data":"a4304d16005995731fefdc081d0677adb43c535c36d93bdb10216b67e4aa8631"} Feb 19 05:27:31 crc 
kubenswrapper[5012]: I0219 05:27:31.391950 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.400248 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" event={"ID":"1fe71123-0d33-41fa-b582-02d70177d0f0","Type":"ContainerStarted","Data":"2652496af01f16e7d01dd4644a2db7919909ac2c6ee7aabe5a04080706a0bb7d"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.401227 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.401862 5012 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kwd8z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.401897 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" podUID="562c18aa-5aed-4f1e-95f5-da1fe7c02523" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.402979 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-7q9qf" podStartSLOduration=8.402960702 podStartE2EDuration="8.402960702s" podCreationTimestamp="2026-02-19 05:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:31.398276314 +0000 UTC m=+147.431598883" 
watchObservedRunningTime="2026-02-19 05:27:31.402960702 +0000 UTC m=+147.436283271" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.403647 5012 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-x52wm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.403669 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" podUID="1fe71123-0d33-41fa-b582-02d70177d0f0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.419190 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" event={"ID":"c2ef24f0-0d7d-4d25-a839-b650893a8332","Type":"ContainerStarted","Data":"9dadbd16fc4257a74b3c758757f2183dec93fc827f2c3a52bc8ea622a33b7e8d"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.437163 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" event={"ID":"4087f246-2160-469e-8ad1-d88c147ff7c0","Type":"ContainerStarted","Data":"3e572b92845b1150adc255bd1a8efbf36b815ce8a7c965027181e1203955ca74"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.437207 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" event={"ID":"4087f246-2160-469e-8ad1-d88c147ff7c0","Type":"ContainerStarted","Data":"24d5a2afa116a58db7c6f6fc860e0f8debb92b44ff9a5a4acb4d222b5b89979a"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.442748 5012 generic.go:334] "Generic (PLEG): container finished" 
podID="9d9907b5-e862-4242-b233-ed39e5de515a" containerID="12562e089e25fe2983a61aba7d6057742e9f0092f47825604a4c818e9f02b0c9" exitCode=0 Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.442841 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" event={"ID":"9d9907b5-e862-4242-b233-ed39e5de515a","Type":"ContainerDied","Data":"12562e089e25fe2983a61aba7d6057742e9f0092f47825604a4c818e9f02b0c9"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.459337 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" podStartSLOduration=125.459322765 podStartE2EDuration="2m5.459322765s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:31.436531571 +0000 UTC m=+147.469854150" watchObservedRunningTime="2026-02-19 05:27:31.459322765 +0000 UTC m=+147.492645334" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.464709 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" event={"ID":"2876c7dd-5979-49eb-ab61-8ffce07376b2","Type":"ContainerStarted","Data":"f00efad93e397330dee961499b255aaf0237f272596ef2b7bd8e55ea2bcb1386"} Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.464754 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.465771 5012 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-jv9qx container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Feb 19 05:27:31 crc 
kubenswrapper[5012]: I0219 05:27:31.465798 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" podUID="2876c7dd-5979-49eb-ab61-8ffce07376b2" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.468831 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:31 crc kubenswrapper[5012]: E0219 05:27:31.469510 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:31.969496444 +0000 UTC m=+148.002819013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.480110 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" podStartSLOduration=125.480089934 podStartE2EDuration="2m5.480089934s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:31.459660874 +0000 UTC m=+147.492983443" watchObservedRunningTime="2026-02-19 05:27:31.480089934 +0000 UTC m=+147.513412503" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.483488 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-87qqk" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.517920 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qg5kd" podStartSLOduration=125.517873078 podStartE2EDuration="2m5.517873078s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:31.482320965 +0000 UTC m=+147.515643534" watchObservedRunningTime="2026-02-19 05:27:31.517873078 +0000 UTC m=+147.551195647" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.555013 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" podStartSLOduration=125.554978724 podStartE2EDuration="2m5.554978724s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:31.542430001 +0000 UTC m=+147.575752560" watchObservedRunningTime="2026-02-19 05:27:31.554978724 +0000 UTC m=+147.588301293" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.601755 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:31 crc kubenswrapper[5012]: E0219 05:27:31.617832 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.117808684 +0000 UTC m=+148.151131253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.667584 5012 patch_prober.go:28] interesting pod/router-default-5444994796-xphkg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 05:27:31 crc kubenswrapper[5012]: [-]has-synced failed: reason withheld Feb 19 05:27:31 crc kubenswrapper[5012]: [+]process-running ok Feb 19 05:27:31 crc kubenswrapper[5012]: healthz check failed Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.667623 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xphkg" podUID="c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.710631 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:31 crc kubenswrapper[5012]: E0219 05:27:31.711013 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 05:27:32.210999996 +0000 UTC m=+148.244322565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.816119 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:31 crc kubenswrapper[5012]: E0219 05:27:31.816820 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.316805553 +0000 UTC m=+148.350128122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:31 crc kubenswrapper[5012]: I0219 05:27:31.917806 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:31 crc kubenswrapper[5012]: E0219 05:27:31.918120 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.418106566 +0000 UTC m=+148.451429135 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.019186 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.019714 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.519684907 +0000 UTC m=+148.553007476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.121207 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.121435 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.621407192 +0000 UTC m=+148.654729761 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.121601 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.121917 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.621904586 +0000 UTC m=+148.655227145 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.222768 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.223013 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.722981093 +0000 UTC m=+148.756303662 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.223093 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.223437 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.723421796 +0000 UTC m=+148.756744365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.324534 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.325451 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.825436239 +0000 UTC m=+148.858758808 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.359614 5012 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-t22fw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:5443/healthz\": context deadline exceeded" start-of-body= Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.359672 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" podUID="ea74b8b2-4803-4ca0-ae7c-9d893bb8cf88" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.28:5443/healthz\": context deadline exceeded" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.426499 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.426902 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:32.926885146 +0000 UTC m=+148.960207715 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.466323 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" event={"ID":"4f848507-d616-4d06-885f-d84210d9b4a0","Type":"ContainerStarted","Data":"a5392e720da0426e7ceef10d0cda2ed58aeb2d4a566e42290efa4b7559b16c98"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.467877 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" event={"ID":"ce585ab5-2554-4d20-8789-cf5bfa8e45a7","Type":"ContainerStarted","Data":"2dcd03507647b2936efc16e245313a460e479c8027de7859ce5d48daf431680d"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.468099 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.469375 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n8t75" event={"ID":"59cc3a77-bf98-42ed-98d8-a921b7039c6f","Type":"ContainerStarted","Data":"f991c6291fe14dcb45ceeb0ac927a3e121130030e26a5474b618370fe6e8d6e7"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.469679 5012 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6mmvm container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" start-of-body= Feb 19 
05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.469728 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" podUID="ce585ab5-2554-4d20-8789-cf5bfa8e45a7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.470792 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" event={"ID":"1fe71123-0d33-41fa-b582-02d70177d0f0","Type":"ContainerStarted","Data":"f925c3970014c9c772e649ce17e14f8da66967853c70803bbd3a58c2cac82bfe"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.472822 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7q9qf" event={"ID":"f5a5c8b4-57c3-43fc-a404-2754d0e70c50","Type":"ContainerStarted","Data":"1a6cf9619c764cefc3b20be635a3b7a82399b2d323243e4f7a65726c22488bbe"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.474138 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" event={"ID":"2876c7dd-5979-49eb-ab61-8ffce07376b2","Type":"ContainerStarted","Data":"6436bc748624620ac0c420a56474e96725fe68bc22d131f413a9bd0bee35ce28"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.476249 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" event={"ID":"73661022-0008-4452-b140-f0a75e4c40c7","Type":"ContainerStarted","Data":"f89345770d722ee3c2c4b2cd055939acb47f86448dad7a7d606bcb930764d614"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.476283 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" 
event={"ID":"73661022-0008-4452-b140-f0a75e4c40c7","Type":"ContainerStarted","Data":"f077b5534c78cf0401c76e13b78eb731161a85181d9a3a47fd5c9ae9fbbf9043"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.478246 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx" event={"ID":"ab107439-3fd5-41e7-9d30-71962fc96028","Type":"ContainerStarted","Data":"c530ea4815d1d6599f495fc12cad697deeb748084c8735ecc301a973e1d0c08e"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.480056 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x2l69" event={"ID":"8a05e6ff-179f-4a04-9fc2-524e31980467","Type":"ContainerStarted","Data":"dc620a880d94d42fb50afd48bb887571bd10dd03b87bb53c851ffd0920ae97ca"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.480086 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x2l69" event={"ID":"8a05e6ff-179f-4a04-9fc2-524e31980467","Type":"ContainerStarted","Data":"e28687a9a53aa4c5c8e2212e5e4708b683f2f047dc1b968f77ee7fa7fff09c4c"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.480447 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.481636 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" event={"ID":"4087f246-2160-469e-8ad1-d88c147ff7c0","Type":"ContainerStarted","Data":"ef1e6cc79735672a629e72ac636446ea1c3d193a856d25644eeded63d0d801f5"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.483094 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" event={"ID":"a3d6e827-2fd3-4026-8bbb-b6336cf7c020","Type":"ContainerStarted","Data":"11ec2736e461ae7f894ad44d757cd4b1b03a577e15a93de342b21849df8ef89e"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 
05:27:32.485060 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" event={"ID":"9d9907b5-e862-4242-b233-ed39e5de515a","Type":"ContainerStarted","Data":"ea1704874446ac427436ac2a83ffb54965d98ad3e0c5a5a91c666ddb9f68fff5"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.485087 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.487550 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" event={"ID":"46582f7f-c6b0-4ae3-9103-4a4754304438","Type":"ContainerStarted","Data":"6ecd18e5cbbb471f815af478d67f7066d4c1bb34788cd0f8db72ff1fe8b502b7"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.488808 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" event={"ID":"9102ddf1-e140-48e7-9ecd-14a4c007f5d5","Type":"ContainerStarted","Data":"cb27ffc8bea8d2f4936d5055096df47700e058e55b5c2982be2365f15b2c4e55"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.490835 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" event={"ID":"e69d69b3-8e9f-4413-93c1-3c1f77388221","Type":"ContainerStarted","Data":"1fda32d91de928b7ab6b4a50e19eb50b8b6a0c562b3d63ffd0717378c3f931b7"} Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.490862 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.492209 5012 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kwd8z container/marketplace-operator namespace/openshift-marketplace: Readiness probe 
status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.492245 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" podUID="562c18aa-5aed-4f1e-95f5-da1fe7c02523" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.492737 5012 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjxj6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.492761 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tjxj6" podUID="c4edd2db-a884-46ac-9a12-0cd2a5daaeb5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.497500 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-gwx52" podStartSLOduration=126.497480319 podStartE2EDuration="2m6.497480319s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.496825111 +0000 UTC m=+148.530147680" watchObservedRunningTime="2026-02-19 05:27:32.497480319 +0000 UTC m=+148.530802888" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.497790 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jv9qx" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.508583 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t22fw" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.527464 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.528979 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.028964311 +0000 UTC m=+149.062286880 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.549137 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-x2l69" podStartSLOduration=9.549120813 podStartE2EDuration="9.549120813s" podCreationTimestamp="2026-02-19 05:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.541920766 +0000 UTC m=+148.575243325" watchObservedRunningTime="2026-02-19 05:27:32.549120813 +0000 UTC m=+148.582443382" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.554485 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52wm" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.565187 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" podStartSLOduration=126.565165642 podStartE2EDuration="2m6.565165642s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.564934976 +0000 UTC m=+148.598257535" watchObservedRunningTime="2026-02-19 05:27:32.565165642 +0000 UTC m=+148.598488211" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.583905 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns-operator/dns-operator-744455d44c-5lz5f" podStartSLOduration=126.583891325 podStartE2EDuration="2m6.583891325s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.580878912 +0000 UTC m=+148.614201481" watchObservedRunningTime="2026-02-19 05:27:32.583891325 +0000 UTC m=+148.617213894" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.584974 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.605272 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" podStartSLOduration=126.60525549 podStartE2EDuration="2m6.60525549s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.603131352 +0000 UTC m=+148.636453921" watchObservedRunningTime="2026-02-19 05:27:32.60525549 +0000 UTC m=+148.638578059" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.629239 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.630213 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.630805 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.130772268 +0000 UTC m=+149.164094837 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.633032 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" podStartSLOduration=127.6330211 podStartE2EDuration="2m7.6330211s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.630701686 +0000 UTC m=+148.664024255" watchObservedRunningTime="2026-02-19 05:27:32.6330211 +0000 UTC m=+148.666343669" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.634117 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:32 crc 
kubenswrapper[5012]: I0219 05:27:32.657643 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-xfb4j" podStartSLOduration=126.657624504 podStartE2EDuration="2m6.657624504s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.652235966 +0000 UTC m=+148.685558535" watchObservedRunningTime="2026-02-19 05:27:32.657624504 +0000 UTC m=+148.690947073" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.658282 5012 patch_prober.go:28] interesting pod/router-default-5444994796-xphkg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 05:27:32 crc kubenswrapper[5012]: [-]has-synced failed: reason withheld Feb 19 05:27:32 crc kubenswrapper[5012]: [+]process-running ok Feb 19 05:27:32 crc kubenswrapper[5012]: healthz check failed Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.658404 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xphkg" podUID="c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.701498 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" podStartSLOduration=127.701478804 podStartE2EDuration="2m7.701478804s" podCreationTimestamp="2026-02-19 05:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.700706133 +0000 UTC m=+148.734028702" watchObservedRunningTime="2026-02-19 05:27:32.701478804 +0000 UTC 
m=+148.734801373" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.734058 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.734565 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.734607 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.734672 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.234638672 +0000 UTC m=+149.267961241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.734702 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.752108 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.752200 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.767970 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.770904 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mbxqf" podStartSLOduration=126.770881474 podStartE2EDuration="2m6.770881474s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.768271123 +0000 UTC m=+148.801593692" watchObservedRunningTime="2026-02-19 05:27:32.770881474 +0000 UTC m=+148.804204043" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.836889 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.837221 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.337210831 +0000 UTC m=+149.370533400 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.861532 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rx82w" podStartSLOduration=126.861509246 podStartE2EDuration="2m6.861509246s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.816943636 +0000 UTC m=+148.850266205" watchObservedRunningTime="2026-02-19 05:27:32.861509246 +0000 UTC m=+148.894831815" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.923175 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sppcx" podStartSLOduration=126.923161384 podStartE2EDuration="2m6.923161384s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:32.859880651 +0000 UTC m=+148.893203220" watchObservedRunningTime="2026-02-19 05:27:32.923161384 +0000 UTC m=+148.956483953" Feb 19 05:27:32 crc kubenswrapper[5012]: I0219 05:27:32.937855 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:32 crc kubenswrapper[5012]: E0219 05:27:32.938237 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.438220836 +0000 UTC m=+149.471543405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.018983 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.035960 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.039368 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.039912 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.53989421 +0000 UTC m=+149.573216769 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.045668 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.140226 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.140914 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.640899335 +0000 UTC m=+149.674221904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.242329 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.242618 5012 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.74260484 +0000 UTC m=+149.775927409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.343380 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.343974 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.843950704 +0000 UTC m=+149.877273273 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.444982 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.445616 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:33.945603278 +0000 UTC m=+149.978925847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.511409 5012 generic.go:334] "Generic (PLEG): container finished" podID="46582f7f-c6b0-4ae3-9103-4a4754304438" containerID="6ecd18e5cbbb471f815af478d67f7066d4c1bb34788cd0f8db72ff1fe8b502b7" exitCode=0 Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.511487 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" event={"ID":"46582f7f-c6b0-4ae3-9103-4a4754304438","Type":"ContainerDied","Data":"6ecd18e5cbbb471f815af478d67f7066d4c1bb34788cd0f8db72ff1fe8b502b7"} Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.515524 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n8t75" event={"ID":"59cc3a77-bf98-42ed-98d8-a921b7039c6f","Type":"ContainerStarted","Data":"74486fcf2869c2781e719093ac09c51de06698182717f4dc05c4a67d46096f0a"} Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.520592 5012 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjxj6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.520647 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tjxj6" podUID="c4edd2db-a884-46ac-9a12-0cd2a5daaeb5" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.546347 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.546635 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:34.046612083 +0000 UTC m=+150.079934652 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.546946 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.549160 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:34.049146923 +0000 UTC m=+150.082469492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.617390 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.648647 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.651424 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:34.151395622 +0000 UTC m=+150.184718191 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.668746 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.680195 5012 patch_prober.go:28] interesting pod/router-default-5444994796-xphkg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 05:27:33 crc kubenswrapper[5012]: [-]has-synced failed: reason withheld Feb 19 05:27:33 crc kubenswrapper[5012]: [+]process-running ok Feb 19 05:27:33 crc kubenswrapper[5012]: healthz check failed Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.680244 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xphkg" podUID="c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 05:27:33 crc kubenswrapper[5012]: W0219 05:27:33.719148 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-5fc5833a15ed29eda06a6213ccb4c8d756da271f16af8ca8f0ecd237c9dd4380 WatchSource:0}: Error finding container 5fc5833a15ed29eda06a6213ccb4c8d756da271f16af8ca8f0ecd237c9dd4380: Status 404 returned error can't find the container with id 
5fc5833a15ed29eda06a6213ccb4c8d756da271f16af8ca8f0ecd237c9dd4380 Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.737090 5012 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.751546 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.751905 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:34.251892994 +0000 UTC m=+150.285215563 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.854546 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.854686 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 05:27:34.354663157 +0000 UTC m=+150.387985726 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.857487 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:33 crc kubenswrapper[5012]: E0219 05:27:33.857778 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 05:27:34.357767022 +0000 UTC m=+150.391089591 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ljzsp" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.911798 5012 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-19T05:27:33.737114199Z","Handler":null,"Name":""} Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.916230 5012 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.916279 5012 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.958493 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 05:27:33 crc kubenswrapper[5012]: W0219 05:27:33.971474 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-9cba4c391c4a9d0349e171123ff6552eb4c54cd23b63e702d08938644e9f84c8 WatchSource:0}: Error finding container 
9cba4c391c4a9d0349e171123ff6552eb4c54cd23b63e702d08938644e9f84c8: Status 404 returned error can't find the container with id 9cba4c391c4a9d0349e171123ff6552eb4c54cd23b63e702d08938644e9f84c8 Feb 19 05:27:33 crc kubenswrapper[5012]: I0219 05:27:33.982067 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.032378 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4xvs8"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.033590 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.036142 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.044844 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4xvs8"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.060542 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.061076 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-utilities\") pod \"certified-operators-4xvs8\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") " pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.061114 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-catalog-content\") pod \"certified-operators-4xvs8\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") " pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.061145 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plzvr\" (UniqueName: \"kubernetes.io/projected/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-kube-api-access-plzvr\") pod \"certified-operators-4xvs8\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") " pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.065694 5012 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.065740 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.088008 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ljzsp\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.161821 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-utilities\") pod \"certified-operators-4xvs8\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") " pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.161979 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-catalog-content\") pod \"certified-operators-4xvs8\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") " pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.162063 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-plzvr\" (UniqueName: \"kubernetes.io/projected/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-kube-api-access-plzvr\") pod \"certified-operators-4xvs8\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") " pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.162260 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-utilities\") pod \"certified-operators-4xvs8\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") " pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.162329 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-catalog-content\") pod \"certified-operators-4xvs8\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") " pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.183409 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plzvr\" (UniqueName: \"kubernetes.io/projected/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-kube-api-access-plzvr\") pod \"certified-operators-4xvs8\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") " pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.215417 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xrjxk"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.216290 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.218048 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.224113 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xrjxk"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.263058 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-catalog-content\") pod \"community-operators-xrjxk\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") " pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.263120 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-utilities\") pod \"community-operators-xrjxk\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") " pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.263144 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtwg8\" (UniqueName: \"kubernetes.io/projected/7b9a1165-24e0-4062-b805-0f8262822507-kube-api-access-gtwg8\") pod \"community-operators-xrjxk\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") " pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.356327 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4xvs8" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.364201 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-catalog-content\") pod \"community-operators-xrjxk\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") " pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.364260 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-utilities\") pod \"community-operators-xrjxk\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") " pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.364288 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtwg8\" (UniqueName: \"kubernetes.io/projected/7b9a1165-24e0-4062-b805-0f8262822507-kube-api-access-gtwg8\") pod \"community-operators-xrjxk\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") " pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.364707 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-catalog-content\") pod \"community-operators-xrjxk\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") " pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.364762 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-utilities\") pod \"community-operators-xrjxk\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") " 
pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.383872 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.385151 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtwg8\" (UniqueName: \"kubernetes.io/projected/7b9a1165-24e0-4062-b805-0f8262822507-kube-api-access-gtwg8\") pod \"community-operators-xrjxk\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") " pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.419535 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5q7vk"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.420438 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.435524 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5q7vk"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.467425 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-utilities\") pod \"certified-operators-5q7vk\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.467491 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-catalog-content\") pod \"certified-operators-5q7vk\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " 
pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.467555 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrsrw\" (UniqueName: \"kubernetes.io/projected/6173dc70-80d4-4f9f-9129-898b2dc38692-kube-api-access-nrsrw\") pod \"certified-operators-5q7vk\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.528860 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrjxk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.531332 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n8t75" event={"ID":"59cc3a77-bf98-42ed-98d8-a921b7039c6f","Type":"ContainerStarted","Data":"b1368d1f90632c6d4bbd85063daf2f132f53457ed9152451edf650ce584a60fd"} Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.531364 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n8t75" event={"ID":"59cc3a77-bf98-42ed-98d8-a921b7039c6f","Type":"ContainerStarted","Data":"5127df93edafa6ae4d58b651e2b89951c7923dd1043c70e657e4434b71b8ad5c"} Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.536234 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ffad8409b5f4d0899c2106106ac6efea5f1f05b5a28f347c9e9f64b7d2f2fac3"} Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.536278 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5fc5833a15ed29eda06a6213ccb4c8d756da271f16af8ca8f0ecd237c9dd4380"} Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.540589 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d136e23e49a6d5e9dcc1ae868f447b7d3648b735d3c6411e7ab87329a144b6f1"} Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.540632 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9cba4c391c4a9d0349e171123ff6552eb4c54cd23b63e702d08938644e9f84c8"} Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.541188 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.549967 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"45ee95c9692a3773b380da397110c1fb5c682017c21c1f94d9d0e2e49866301d"} Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.550004 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a14063f7e3c2fbcd2e9f3960b6f031e379fef81577c20ed6fa027502cbf975a1"} Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.552066 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-n8t75" podStartSLOduration=11.55205007 podStartE2EDuration="11.55205007s" podCreationTimestamp="2026-02-19 05:27:23 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:34.550674042 +0000 UTC m=+150.583996611" watchObservedRunningTime="2026-02-19 05:27:34.55205007 +0000 UTC m=+150.585372639" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.569215 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-utilities\") pod \"certified-operators-5q7vk\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.569329 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-catalog-content\") pod \"certified-operators-5q7vk\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.569539 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrsrw\" (UniqueName: \"kubernetes.io/projected/6173dc70-80d4-4f9f-9129-898b2dc38692-kube-api-access-nrsrw\") pod \"certified-operators-5q7vk\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.572516 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-catalog-content\") pod \"certified-operators-5q7vk\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.576942 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-utilities\") pod \"certified-operators-5q7vk\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.602905 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrsrw\" (UniqueName: \"kubernetes.io/projected/6173dc70-80d4-4f9f-9129-898b2dc38692-kube-api-access-nrsrw\") pod \"certified-operators-5q7vk\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") " pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.651375 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-br86z"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.653762 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.658667 5012 patch_prober.go:28] interesting pod/router-default-5444994796-xphkg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 05:27:34 crc kubenswrapper[5012]: [-]has-synced failed: reason withheld Feb 19 05:27:34 crc kubenswrapper[5012]: [+]process-running ok Feb 19 05:27:34 crc kubenswrapper[5012]: healthz check failed Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.658766 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xphkg" podUID="c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.672282 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-br86z"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.672785 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-catalog-content\") pod \"community-operators-br86z\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.672827 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-utilities\") pod \"community-operators-br86z\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.672894 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzlc7\" (UniqueName: \"kubernetes.io/projected/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-kube-api-access-tzlc7\") pod \"community-operators-br86z\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.730537 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.736913 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ljzsp"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.774084 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzlc7\" (UniqueName: 
\"kubernetes.io/projected/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-kube-api-access-tzlc7\") pod \"community-operators-br86z\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.774148 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-catalog-content\") pod \"community-operators-br86z\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.774191 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-utilities\") pod \"community-operators-br86z\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.774568 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-utilities\") pod \"community-operators-br86z\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.775022 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-catalog-content\") pod \"community-operators-br86z\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.778946 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5q7vk" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.794395 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4xvs8"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.800909 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzlc7\" (UniqueName: \"kubernetes.io/projected/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-kube-api-access-tzlc7\") pod \"community-operators-br86z\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:34 crc kubenswrapper[5012]: W0219 05:27:34.816638 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7ce4c2b_d3b7_4881_91fe_49f7103f12b9.slice/crio-b8c85544c6a863422777f31be4cc9ef9cf579d3d709dec29ffff9c467cf857f1 WatchSource:0}: Error finding container b8c85544c6a863422777f31be4cc9ef9cf579d3d709dec29ffff9c467cf857f1: Status 404 returned error can't find the container with id b8c85544c6a863422777f31be4cc9ef9cf579d3d709dec29ffff9c467cf857f1 Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.854917 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.861533 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xrjxk"] Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.875719 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46582f7f-c6b0-4ae3-9103-4a4754304438-secret-volume\") pod \"46582f7f-c6b0-4ae3-9103-4a4754304438\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.875813 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c24k9\" (UniqueName: \"kubernetes.io/projected/46582f7f-c6b0-4ae3-9103-4a4754304438-kube-api-access-c24k9\") pod \"46582f7f-c6b0-4ae3-9103-4a4754304438\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.875881 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46582f7f-c6b0-4ae3-9103-4a4754304438-config-volume\") pod \"46582f7f-c6b0-4ae3-9103-4a4754304438\" (UID: \"46582f7f-c6b0-4ae3-9103-4a4754304438\") " Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.876893 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46582f7f-c6b0-4ae3-9103-4a4754304438-config-volume" (OuterVolumeSpecName: "config-volume") pod "46582f7f-c6b0-4ae3-9103-4a4754304438" (UID: "46582f7f-c6b0-4ae3-9103-4a4754304438"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.882320 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46582f7f-c6b0-4ae3-9103-4a4754304438-kube-api-access-c24k9" (OuterVolumeSpecName: "kube-api-access-c24k9") pod "46582f7f-c6b0-4ae3-9103-4a4754304438" (UID: "46582f7f-c6b0-4ae3-9103-4a4754304438"). InnerVolumeSpecName "kube-api-access-c24k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.891579 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46582f7f-c6b0-4ae3-9103-4a4754304438-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "46582f7f-c6b0-4ae3-9103-4a4754304438" (UID: "46582f7f-c6b0-4ae3-9103-4a4754304438"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:27:34 crc kubenswrapper[5012]: W0219 05:27:34.915172 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b9a1165_24e0_4062_b805_0f8262822507.slice/crio-1dbae8515e388d77b201dd3b6779da7c54d4915cbd633620f81733f1a3b7142f WatchSource:0}: Error finding container 1dbae8515e388d77b201dd3b6779da7c54d4915cbd633620f81733f1a3b7142f: Status 404 returned error can't find the container with id 1dbae8515e388d77b201dd3b6779da7c54d4915cbd633620f81733f1a3b7142f Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.979685 5012 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46582f7f-c6b0-4ae3-9103-4a4754304438-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.980067 5012 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46582f7f-c6b0-4ae3-9103-4a4754304438-secret-volume\") on node 
\"crc\" DevicePath \"\"" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.980077 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c24k9\" (UniqueName: \"kubernetes.io/projected/46582f7f-c6b0-4ae3-9103-4a4754304438-kube-api-access-c24k9\") on node \"crc\" DevicePath \"\"" Feb 19 05:27:34 crc kubenswrapper[5012]: I0219 05:27:34.980707 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-br86z" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.040350 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5q7vk"] Feb 19 05:27:35 crc kubenswrapper[5012]: W0219 05:27:35.053633 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6173dc70_80d4_4f9f_9129_898b2dc38692.slice/crio-51fb0c10b65b4e5eeccf825cbe8bef0aec67c350bcfafc478899d702eea9c2e4 WatchSource:0}: Error finding container 51fb0c10b65b4e5eeccf825cbe8bef0aec67c350bcfafc478899d702eea9c2e4: Status 404 returned error can't find the container with id 51fb0c10b65b4e5eeccf825cbe8bef0aec67c350bcfafc478899d702eea9c2e4 Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.080005 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 05:27:35 crc kubenswrapper[5012]: E0219 05:27:35.080211 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46582f7f-c6b0-4ae3-9103-4a4754304438" containerName="collect-profiles" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.080227 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="46582f7f-c6b0-4ae3-9103-4a4754304438" containerName="collect-profiles" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.080354 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="46582f7f-c6b0-4ae3-9103-4a4754304438" containerName="collect-profiles" Feb 19 
05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.080745 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.084432 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.084614 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.086976 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.184164 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6954f621-15eb-4515-8855-5bf05a7119c5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6954f621-15eb-4515-8855-5bf05a7119c5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.184250 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6954f621-15eb-4515-8855-5bf05a7119c5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6954f621-15eb-4515-8855-5bf05a7119c5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.232980 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-br86z"] Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.287255 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/6954f621-15eb-4515-8855-5bf05a7119c5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6954f621-15eb-4515-8855-5bf05a7119c5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.287472 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6954f621-15eb-4515-8855-5bf05a7119c5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6954f621-15eb-4515-8855-5bf05a7119c5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.287536 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6954f621-15eb-4515-8855-5bf05a7119c5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6954f621-15eb-4515-8855-5bf05a7119c5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.329094 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6954f621-15eb-4515-8855-5bf05a7119c5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6954f621-15eb-4515-8855-5bf05a7119c5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.405713 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.411646 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-hjmb9" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.548560 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.558728 5012 generic.go:334] "Generic (PLEG): container finished" podID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerID="06936ac625543a23fe6a94c680d400a453b3652063590fadf1140acbd164e331" exitCode=0 Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.558948 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xvs8" event={"ID":"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9","Type":"ContainerDied","Data":"06936ac625543a23fe6a94c680d400a453b3652063590fadf1140acbd164e331"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.558995 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xvs8" event={"ID":"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9","Type":"ContainerStarted","Data":"b8c85544c6a863422777f31be4cc9ef9cf579d3d709dec29ffff9c467cf857f1"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.562013 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-br86z" event={"ID":"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc","Type":"ContainerStarted","Data":"be080096b804213f30565dd54118337146dcc411c16ff0c8a6962f9fd3f03e3a"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.563881 5012 generic.go:334] "Generic (PLEG): container finished" podID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerID="0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc" exitCode=0 Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.563993 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q7vk" event={"ID":"6173dc70-80d4-4f9f-9129-898b2dc38692","Type":"ContainerDied","Data":"0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.564061 5012 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-5q7vk" event={"ID":"6173dc70-80d4-4f9f-9129-898b2dc38692","Type":"ContainerStarted","Data":"51fb0c10b65b4e5eeccf825cbe8bef0aec67c350bcfafc478899d702eea9c2e4"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.567360 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" event={"ID":"46582f7f-c6b0-4ae3-9103-4a4754304438","Type":"ContainerDied","Data":"9d6ad88222eb3dc7a89c4d09501dab7f14514064a3aea52304068199c3bce69f"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.567464 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d6ad88222eb3dc7a89c4d09501dab7f14514064a3aea52304068199c3bce69f" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.567557 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.569894 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.573758 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" event={"ID":"70e7a5c6-0abf-4c78-8087-958a19264b49","Type":"ContainerStarted","Data":"83d6198005201c652f989f86934dfd0087e9ca81b54e4a24ea15985ceb37c2cd"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.573810 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" event={"ID":"70e7a5c6-0abf-4c78-8087-958a19264b49","Type":"ContainerStarted","Data":"b4a4a4ebd6fc7c45c5fc88ca24394f42a5591b27d7679378f83e52a1da7bb083"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.574649 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.585584 5012 generic.go:334] "Generic (PLEG): container finished" podID="7b9a1165-24e0-4062-b805-0f8262822507" containerID="0fe51da344cbaacf6697c74dcff49e7182b9df6468c8ccbfb60f3cd9e38eda3d" exitCode=0 Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.586224 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrjxk" event={"ID":"7b9a1165-24e0-4062-b805-0f8262822507","Type":"ContainerDied","Data":"0fe51da344cbaacf6697c74dcff49e7182b9df6468c8ccbfb60f3cd9e38eda3d"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.586350 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrjxk" event={"ID":"7b9a1165-24e0-4062-b805-0f8262822507","Type":"ContainerStarted","Data":"1dbae8515e388d77b201dd3b6779da7c54d4915cbd633620f81733f1a3b7142f"} Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.606607 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" podStartSLOduration=129.606585752 podStartE2EDuration="2m9.606585752s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:35.606366086 +0000 UTC m=+151.639688655" watchObservedRunningTime="2026-02-19 05:27:35.606585752 +0000 UTC m=+151.639908321" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.654830 5012 patch_prober.go:28] interesting pod/router-default-5444994796-xphkg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 05:27:35 crc kubenswrapper[5012]: [-]has-synced failed: reason withheld Feb 19 05:27:35 crc kubenswrapper[5012]: 
[+]process-running ok Feb 19 05:27:35 crc kubenswrapper[5012]: healthz check failed Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.654939 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xphkg" podUID="c75dab1e-8eb0-42e5-bc33-f0bf1ebb3dd8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.876367 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-9kvdd" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.925510 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.973045 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.974481 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.977804 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.979945 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 05:27:35 crc kubenswrapper[5012]: I0219 05:27:35.982608 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.000790 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.000879 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.025001 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-29nf4"] Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.026390 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.028095 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.036602 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-29nf4"] Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.102416 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.102506 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-utilities\") pod \"redhat-marketplace-29nf4\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") " pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.102534 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.102555 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-catalog-content\") pod \"redhat-marketplace-29nf4\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") " 
pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.102580 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz86t\" (UniqueName: \"kubernetes.io/projected/185ea561-a45e-49e1-a46b-f9bf9f6d2527-kube-api-access-fz86t\") pod \"redhat-marketplace-29nf4\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") " pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.102625 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.121810 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.207965 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-catalog-content\") pod \"redhat-marketplace-29nf4\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") " pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.208397 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz86t\" (UniqueName: \"kubernetes.io/projected/185ea561-a45e-49e1-a46b-f9bf9f6d2527-kube-api-access-fz86t\") pod \"redhat-marketplace-29nf4\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") " 
pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.208518 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-utilities\") pod \"redhat-marketplace-29nf4\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") " pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.209011 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-utilities\") pod \"redhat-marketplace-29nf4\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") " pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.209287 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-catalog-content\") pod \"redhat-marketplace-29nf4\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") " pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.228913 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz86t\" (UniqueName: \"kubernetes.io/projected/185ea561-a45e-49e1-a46b-f9bf9f6d2527-kube-api-access-fz86t\") pod \"redhat-marketplace-29nf4\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") " pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.291270 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.335183 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.335244 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.338573 5012 patch_prober.go:28] interesting pod/console-f9d7485db-mlxbg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.338622 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-mlxbg" podUID="5ff8f20f-5302-4b7a-826c-5d557c65c0f3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.341892 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-29nf4" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.420210 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x576d"] Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.421623 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.423751 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x576d"] Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.516968 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g9fc\" (UniqueName: \"kubernetes.io/projected/2269b2c9-4876-43e3-85ce-9650ffec804f-kube-api-access-4g9fc\") pod \"redhat-marketplace-x576d\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.517427 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-utilities\") pod \"redhat-marketplace-x576d\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.517459 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-catalog-content\") pod \"redhat-marketplace-x576d\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.566121 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 05:27:36 crc kubenswrapper[5012]: W0219 05:27:36.609099 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod13070b31_1da3_4cbd_8281_072d0ab1a3dd.slice/crio-87ab6eb483767ec772ec703bf757d37c8c24a75c4922e31a84e430bedd92a159 WatchSource:0}: Error finding container 
87ab6eb483767ec772ec703bf757d37c8c24a75c4922e31a84e430bedd92a159: Status 404 returned error can't find the container with id 87ab6eb483767ec772ec703bf757d37c8c24a75c4922e31a84e430bedd92a159 Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.619997 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-catalog-content\") pod \"redhat-marketplace-x576d\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.620046 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g9fc\" (UniqueName: \"kubernetes.io/projected/2269b2c9-4876-43e3-85ce-9650ffec804f-kube-api-access-4g9fc\") pod \"redhat-marketplace-x576d\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.620105 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-utilities\") pod \"redhat-marketplace-x576d\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.620483 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-utilities\") pod \"redhat-marketplace-x576d\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.620770 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-catalog-content\") pod \"redhat-marketplace-x576d\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.622124 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-twxgh" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.642531 5012 generic.go:334] "Generic (PLEG): container finished" podID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerID="e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56" exitCode=0 Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.642616 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-br86z" event={"ID":"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc","Type":"ContainerDied","Data":"e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56"} Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.657639 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g9fc\" (UniqueName: \"kubernetes.io/projected/2269b2c9-4876-43e3-85ce-9650ffec804f-kube-api-access-4g9fc\") pod \"redhat-marketplace-x576d\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") " pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.649518 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6954f621-15eb-4515-8855-5bf05a7119c5","Type":"ContainerStarted","Data":"8acea0f167402859981dc839cf3505db2ef197f0b780f6627e5c2b682ff17782"} Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.659665 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"6954f621-15eb-4515-8855-5bf05a7119c5","Type":"ContainerStarted","Data":"1ccd16d1972d25b1763f5be3a7ec9e3f85e4b8a335594d9e379233169b3aff08"} Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.659698 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.659767 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.664249 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xphkg" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.689699 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.689676306 podStartE2EDuration="1.689676306s" podCreationTimestamp="2026-02-19 05:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:36.687501097 +0000 UTC m=+152.720823666" watchObservedRunningTime="2026-02-19 05:27:36.689676306 +0000 UTC m=+152.722998875" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.776795 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x576d" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.859811 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-29nf4"] Feb 19 05:27:36 crc kubenswrapper[5012]: W0219 05:27:36.873467 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod185ea561_a45e_49e1_a46b_f9bf9f6d2527.slice/crio-d6248eb1f07ab21d429bccf4d50cb020bfc4631adebda71b1fd6e99e737ec5c4 WatchSource:0}: Error finding container d6248eb1f07ab21d429bccf4d50cb020bfc4631adebda71b1fd6e99e737ec5c4: Status 404 returned error can't find the container with id d6248eb1f07ab21d429bccf4d50cb020bfc4631adebda71b1fd6e99e737ec5c4 Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.938869 5012 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjxj6 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.938918 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-tjxj6" podUID="c4edd2db-a884-46ac-9a12-0cd2a5daaeb5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.939052 5012 patch_prober.go:28] interesting pod/downloads-7954f5f757-tjxj6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Feb 19 05:27:36 crc kubenswrapper[5012]: I0219 05:27:36.941384 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-tjxj6" 
podUID="c4edd2db-a884-46ac-9a12-0cd2a5daaeb5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.090329 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x576d"] Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.224269 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rprhz"] Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.225263 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.231667 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.291265 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rprhz"] Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.338293 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-utilities\") pod \"redhat-operators-rprhz\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.338343 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-catalog-content\") pod \"redhat-operators-rprhz\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.338364 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr49f\" (UniqueName: \"kubernetes.io/projected/e45c788c-c8a0-4563-8d05-71915e390342-kube-api-access-pr49f\") pod \"redhat-operators-rprhz\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.440890 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-utilities\") pod \"redhat-operators-rprhz\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.440935 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-catalog-content\") pod \"redhat-operators-rprhz\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.440957 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr49f\" (UniqueName: \"kubernetes.io/projected/e45c788c-c8a0-4563-8d05-71915e390342-kube-api-access-pr49f\") pod \"redhat-operators-rprhz\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.442058 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-catalog-content\") pod \"redhat-operators-rprhz\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.442635 5012 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-utilities\") pod \"redhat-operators-rprhz\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.467129 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr49f\" (UniqueName: \"kubernetes.io/projected/e45c788c-c8a0-4563-8d05-71915e390342-kube-api-access-pr49f\") pod \"redhat-operators-rprhz\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") " pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.559193 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.625342 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-48wp9"] Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.626284 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.644877 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-utilities\") pod \"redhat-operators-48wp9\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.644920 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-catalog-content\") pod \"redhat-operators-48wp9\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.644939 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlfxl\" (UniqueName: \"kubernetes.io/projected/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-kube-api-access-rlfxl\") pod \"redhat-operators-48wp9\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.672477 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-48wp9"] Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.680776 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"13070b31-1da3-4cbd-8281-072d0ab1a3dd","Type":"ContainerStarted","Data":"c1adebc5f1403256c42763a66b6e381a177799c4b736890ae0e7d77d62d55407"} Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.680833 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"13070b31-1da3-4cbd-8281-072d0ab1a3dd","Type":"ContainerStarted","Data":"87ab6eb483767ec772ec703bf757d37c8c24a75c4922e31a84e430bedd92a159"} Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.690736 5012 generic.go:334] "Generic (PLEG): container finished" podID="6954f621-15eb-4515-8855-5bf05a7119c5" containerID="8acea0f167402859981dc839cf3505db2ef197f0b780f6627e5c2b682ff17782" exitCode=0 Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.690911 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6954f621-15eb-4515-8855-5bf05a7119c5","Type":"ContainerDied","Data":"8acea0f167402859981dc839cf3505db2ef197f0b780f6627e5c2b682ff17782"} Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.710647 5012 generic.go:334] "Generic (PLEG): container finished" podID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerID="dc1a50c23707e41d34121953c7a07c7a6d9a618fec62090df956fa84f7fc89cb" exitCode=0 Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.710830 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29nf4" event={"ID":"185ea561-a45e-49e1-a46b-f9bf9f6d2527","Type":"ContainerDied","Data":"dc1a50c23707e41d34121953c7a07c7a6d9a618fec62090df956fa84f7fc89cb"} Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.710867 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29nf4" event={"ID":"185ea561-a45e-49e1-a46b-f9bf9f6d2527","Type":"ContainerStarted","Data":"d6248eb1f07ab21d429bccf4d50cb020bfc4631adebda71b1fd6e99e737ec5c4"} Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.712452 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.712430678 podStartE2EDuration="2.712430678s" podCreationTimestamp="2026-02-19 05:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:27:37.708664225 +0000 UTC m=+153.741986794" watchObservedRunningTime="2026-02-19 05:27:37.712430678 +0000 UTC m=+153.745753247" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.719376 5012 generic.go:334] "Generic (PLEG): container finished" podID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerID="425cdf1067d5b62c628d2a89d10d8e953e7e1cf4ee46294d6c0c129fa2655d83" exitCode=0 Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.719578 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x576d" event={"ID":"2269b2c9-4876-43e3-85ce-9650ffec804f","Type":"ContainerDied","Data":"425cdf1067d5b62c628d2a89d10d8e953e7e1cf4ee46294d6c0c129fa2655d83"} Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.719632 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x576d" event={"ID":"2269b2c9-4876-43e3-85ce-9650ffec804f","Type":"ContainerStarted","Data":"e1cb3c13c7905eb67fe5b6fee6bfb21b5e93340cd8fbda0eba5b4e60709ae667"} Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.746543 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-utilities\") pod \"redhat-operators-48wp9\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.746594 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-catalog-content\") pod \"redhat-operators-48wp9\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.746643 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rlfxl\" (UniqueName: \"kubernetes.io/projected/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-kube-api-access-rlfxl\") pod \"redhat-operators-48wp9\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.747711 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-utilities\") pod \"redhat-operators-48wp9\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.747823 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-catalog-content\") pod \"redhat-operators-48wp9\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.779337 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlfxl\" (UniqueName: \"kubernetes.io/projected/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-kube-api-access-rlfxl\") pod \"redhat-operators-48wp9\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:37 crc kubenswrapper[5012]: I0219 05:27:37.980351 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.175385 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rprhz"] Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.347496 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-48wp9"] Feb 19 05:27:38 crc kubenswrapper[5012]: W0219 05:27:38.384772 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b13dfa9_14e1_4ad5_b6c6_f86486a73e9a.slice/crio-d28f8c0cd228cb43c2f0346277beec93d922e7fa5ce5493ec945b62c4230d6ab WatchSource:0}: Error finding container d28f8c0cd228cb43c2f0346277beec93d922e7fa5ce5493ec945b62c4230d6ab: Status 404 returned error can't find the container with id d28f8c0cd228cb43c2f0346277beec93d922e7fa5ce5493ec945b62c4230d6ab Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.732562 5012 generic.go:334] "Generic (PLEG): container finished" podID="e45c788c-c8a0-4563-8d05-71915e390342" containerID="4b13a012dcea4fefc2b4e7757fddd764d86d7e0aa4fa7cfad77d502f2efa1ea0" exitCode=0 Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.732639 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprhz" event={"ID":"e45c788c-c8a0-4563-8d05-71915e390342","Type":"ContainerDied","Data":"4b13a012dcea4fefc2b4e7757fddd764d86d7e0aa4fa7cfad77d502f2efa1ea0"} Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.732662 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprhz" event={"ID":"e45c788c-c8a0-4563-8d05-71915e390342","Type":"ContainerStarted","Data":"330c4277adf991cb8d45015f1cf3ae0cb9906f5605d279d3a2745e3670726677"} Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.735729 5012 generic.go:334] "Generic (PLEG): container finished" 
podID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerID="beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d" exitCode=0 Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.735799 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48wp9" event={"ID":"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a","Type":"ContainerDied","Data":"beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d"} Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.735835 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48wp9" event={"ID":"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a","Type":"ContainerStarted","Data":"d28f8c0cd228cb43c2f0346277beec93d922e7fa5ce5493ec945b62c4230d6ab"} Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.744641 5012 generic.go:334] "Generic (PLEG): container finished" podID="13070b31-1da3-4cbd-8281-072d0ab1a3dd" containerID="c1adebc5f1403256c42763a66b6e381a177799c4b736890ae0e7d77d62d55407" exitCode=0 Feb 19 05:27:38 crc kubenswrapper[5012]: I0219 05:27:38.744696 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"13070b31-1da3-4cbd-8281-072d0ab1a3dd","Type":"ContainerDied","Data":"c1adebc5f1403256c42763a66b6e381a177799c4b736890ae0e7d77d62d55407"} Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.017600 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.090834 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6954f621-15eb-4515-8855-5bf05a7119c5-kubelet-dir\") pod \"6954f621-15eb-4515-8855-5bf05a7119c5\" (UID: \"6954f621-15eb-4515-8855-5bf05a7119c5\") " Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.091022 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6954f621-15eb-4515-8855-5bf05a7119c5-kube-api-access\") pod \"6954f621-15eb-4515-8855-5bf05a7119c5\" (UID: \"6954f621-15eb-4515-8855-5bf05a7119c5\") " Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.092949 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6954f621-15eb-4515-8855-5bf05a7119c5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6954f621-15eb-4515-8855-5bf05a7119c5" (UID: "6954f621-15eb-4515-8855-5bf05a7119c5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.106737 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6954f621-15eb-4515-8855-5bf05a7119c5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6954f621-15eb-4515-8855-5bf05a7119c5" (UID: "6954f621-15eb-4515-8855-5bf05a7119c5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.192924 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6954f621-15eb-4515-8855-5bf05a7119c5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.192953 5012 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6954f621-15eb-4515-8855-5bf05a7119c5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.766635 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6954f621-15eb-4515-8855-5bf05a7119c5","Type":"ContainerDied","Data":"1ccd16d1972d25b1763f5be3a7ec9e3f85e4b8a335594d9e379233169b3aff08"} Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.766677 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ccd16d1972d25b1763f5be3a7ec9e3f85e4b8a335594d9e379233169b3aff08" Feb 19 05:27:39 crc kubenswrapper[5012]: I0219 05:27:39.766770 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.091771 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.124610 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kubelet-dir\") pod \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\" (UID: \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\") " Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.124675 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kube-api-access\") pod \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\" (UID: \"13070b31-1da3-4cbd-8281-072d0ab1a3dd\") " Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.124741 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "13070b31-1da3-4cbd-8281-072d0ab1a3dd" (UID: "13070b31-1da3-4cbd-8281-072d0ab1a3dd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.124940 5012 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.144420 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "13070b31-1da3-4cbd-8281-072d0ab1a3dd" (UID: "13070b31-1da3-4cbd-8281-072d0ab1a3dd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.226344 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/13070b31-1da3-4cbd-8281-072d0ab1a3dd-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.785920 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"13070b31-1da3-4cbd-8281-072d0ab1a3dd","Type":"ContainerDied","Data":"87ab6eb483767ec772ec703bf757d37c8c24a75c4922e31a84e430bedd92a159"} Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.786218 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87ab6eb483767ec772ec703bf757d37c8c24a75c4922e31a84e430bedd92a159" Feb 19 05:27:40 crc kubenswrapper[5012]: I0219 05:27:40.786190 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 05:27:42 crc kubenswrapper[5012]: I0219 05:27:42.106833 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-x2l69" Feb 19 05:27:44 crc kubenswrapper[5012]: I0219 05:27:44.431043 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:27:44 crc kubenswrapper[5012]: I0219 05:27:44.432128 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:27:46 
crc kubenswrapper[5012]: I0219 05:27:46.359601 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:46 crc kubenswrapper[5012]: I0219 05:27:46.363680 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:27:46 crc kubenswrapper[5012]: I0219 05:27:46.953053 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-tjxj6" Feb 19 05:27:48 crc kubenswrapper[5012]: I0219 05:27:48.561826 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:48 crc kubenswrapper[5012]: I0219 05:27:48.575037 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2e231950-a365-4a82-9481-05fdac171449-metrics-certs\") pod \"network-metrics-daemon-q5cb2\" (UID: \"2e231950-a365-4a82-9481-05fdac171449\") " pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:48 crc kubenswrapper[5012]: I0219 05:27:48.628214 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-q5cb2" Feb 19 05:27:54 crc kubenswrapper[5012]: I0219 05:27:54.398441 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:28:02 crc kubenswrapper[5012]: E0219 05:28:02.994874 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 05:28:02 crc kubenswrapper[5012]: E0219 05:28:02.995929 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plzvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil
,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4xvs8_openshift-marketplace(a7ce4c2b-d3b7-4881-91fe-49f7103f12b9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 05:28:02 crc kubenswrapper[5012]: E0219 05:28:02.997142 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4xvs8" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" Feb 19 05:28:05 crc kubenswrapper[5012]: E0219 05:28:05.025347 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4xvs8" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" Feb 19 05:28:06 crc kubenswrapper[5012]: I0219 05:28:06.682236 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-mt2l6" Feb 19 05:28:09 crc kubenswrapper[5012]: E0219 05:28:09.363869 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 05:28:09 crc kubenswrapper[5012]: E0219 05:28:09.364999 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tzlc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-br86z_openshift-marketplace(e16bf8e1-cd8b-48fc-9726-40c1b397a6bc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 05:28:09 crc kubenswrapper[5012]: E0219 05:28:09.366203 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-br86z" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" Feb 19 05:28:09 crc kubenswrapper[5012]: E0219 05:28:09.425360 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 05:28:09 crc kubenswrapper[5012]: E0219 05:28:09.425535 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pr49f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSourc
e{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-rprhz_openshift-marketplace(e45c788c-c8a0-4563-8d05-71915e390342): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 05:28:09 crc kubenswrapper[5012]: E0219 05:28:09.426878 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-rprhz" podUID="e45c788c-c8a0-4563-8d05-71915e390342" Feb 19 05:28:09 crc kubenswrapper[5012]: E0219 05:28:09.458636 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 05:28:09 crc kubenswrapper[5012]: E0219 05:28:09.459078 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlfxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-48wp9_openshift-marketplace(7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 05:28:09 crc kubenswrapper[5012]: E0219 05:28:09.460385 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-48wp9" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" Feb 19 05:28:09 crc 
kubenswrapper[5012]: I0219 05:28:09.823074 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-q5cb2"] Feb 19 05:28:09 crc kubenswrapper[5012]: W0219 05:28:09.890621 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e231950_a365_4a82_9481_05fdac171449.slice/crio-b2e06b54c13f611969946e31516345aec736d4f562ad6bf9bfc68714c955cbbc WatchSource:0}: Error finding container b2e06b54c13f611969946e31516345aec736d4f562ad6bf9bfc68714c955cbbc: Status 404 returned error can't find the container with id b2e06b54c13f611969946e31516345aec736d4f562ad6bf9bfc68714c955cbbc Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.017787 5012 generic.go:334] "Generic (PLEG): container finished" podID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerID="87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4" exitCode=0 Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.017841 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q7vk" event={"ID":"6173dc70-80d4-4f9f-9129-898b2dc38692","Type":"ContainerDied","Data":"87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4"} Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.020076 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" event={"ID":"2e231950-a365-4a82-9481-05fdac171449","Type":"ContainerStarted","Data":"b2e06b54c13f611969946e31516345aec736d4f562ad6bf9bfc68714c955cbbc"} Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.024343 5012 generic.go:334] "Generic (PLEG): container finished" podID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerID="b38c4d760b78c9580d7920d8d103f03ae36a4fb22594d35317c5a0fc8161982d" exitCode=0 Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.024419 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-29nf4" event={"ID":"185ea561-a45e-49e1-a46b-f9bf9f6d2527","Type":"ContainerDied","Data":"b38c4d760b78c9580d7920d8d103f03ae36a4fb22594d35317c5a0fc8161982d"} Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.027371 5012 generic.go:334] "Generic (PLEG): container finished" podID="7b9a1165-24e0-4062-b805-0f8262822507" containerID="4d96789a875fc9919836ff36dc1d21b427a832c3292532d47b588b770f2a75ed" exitCode=0 Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.027460 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrjxk" event={"ID":"7b9a1165-24e0-4062-b805-0f8262822507","Type":"ContainerDied","Data":"4d96789a875fc9919836ff36dc1d21b427a832c3292532d47b588b770f2a75ed"} Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.032757 5012 generic.go:334] "Generic (PLEG): container finished" podID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerID="0cfb2a088fe11f89edf830fa194013f6aaa648491c76af6a0be0b1ed87f083f2" exitCode=0 Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.032884 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x576d" event={"ID":"2269b2c9-4876-43e3-85ce-9650ffec804f","Type":"ContainerDied","Data":"0cfb2a088fe11f89edf830fa194013f6aaa648491c76af6a0be0b1ed87f083f2"} Feb 19 05:28:10 crc kubenswrapper[5012]: E0219 05:28:10.035718 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-rprhz" podUID="e45c788c-c8a0-4563-8d05-71915e390342" Feb 19 05:28:10 crc kubenswrapper[5012]: E0219 05:28:10.040034 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-48wp9" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" Feb 19 05:28:10 crc kubenswrapper[5012]: E0219 05:28:10.041788 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-br86z" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.973745 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 05:28:10 crc kubenswrapper[5012]: E0219 05:28:10.976029 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13070b31-1da3-4cbd-8281-072d0ab1a3dd" containerName="pruner" Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.976055 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="13070b31-1da3-4cbd-8281-072d0ab1a3dd" containerName="pruner" Feb 19 05:28:10 crc kubenswrapper[5012]: E0219 05:28:10.976080 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6954f621-15eb-4515-8855-5bf05a7119c5" containerName="pruner" Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.976092 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6954f621-15eb-4515-8855-5bf05a7119c5" containerName="pruner" Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.976261 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="13070b31-1da3-4cbd-8281-072d0ab1a3dd" containerName="pruner" Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.976283 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6954f621-15eb-4515-8855-5bf05a7119c5" containerName="pruner" Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.978771 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.984355 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 05:28:10 crc kubenswrapper[5012]: I0219 05:28:10.994490 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.003734 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.042338 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x576d" event={"ID":"2269b2c9-4876-43e3-85ce-9650ffec804f","Type":"ContainerStarted","Data":"1f7b0db88035160e95fb281ad896b78f2deead667d95c19e81640e666f8610f7"} Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.045984 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q7vk" event={"ID":"6173dc70-80d4-4f9f-9129-898b2dc38692","Type":"ContainerStarted","Data":"ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf"} Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.051132 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" event={"ID":"2e231950-a365-4a82-9481-05fdac171449","Type":"ContainerStarted","Data":"30e5371ddc17f6259cc33364fae311112285fee802719a505b42facea40f8c67"} Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.051164 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-q5cb2" event={"ID":"2e231950-a365-4a82-9481-05fdac171449","Type":"ContainerStarted","Data":"4bc8b351abe79500a1634b081e0952c1dd89a39761227cfe52a7e9bfe0b207c8"} Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.054456 5012 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29nf4" event={"ID":"185ea561-a45e-49e1-a46b-f9bf9f6d2527","Type":"ContainerStarted","Data":"ee07414de7a83d1212fd24fac006255c845d66e5f8765acbd5026e0f77d5182b"} Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.057124 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrjxk" event={"ID":"7b9a1165-24e0-4062-b805-0f8262822507","Type":"ContainerStarted","Data":"70dff26f289767b3751863d9c38507087e8b580a75adbd7af49ca49b727a95a9"} Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.066697 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x576d" podStartSLOduration=2.313199768 podStartE2EDuration="35.066681602s" podCreationTimestamp="2026-02-19 05:27:36 +0000 UTC" firstStartedPulling="2026-02-19 05:27:37.723505662 +0000 UTC m=+153.756828231" lastFinishedPulling="2026-02-19 05:28:10.476987456 +0000 UTC m=+186.510310065" observedRunningTime="2026-02-19 05:28:11.065709705 +0000 UTC m=+187.099032274" watchObservedRunningTime="2026-02-19 05:28:11.066681602 +0000 UTC m=+187.100004171" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.086955 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-29nf4" podStartSLOduration=2.135048429 podStartE2EDuration="35.086935736s" podCreationTimestamp="2026-02-19 05:27:36 +0000 UTC" firstStartedPulling="2026-02-19 05:27:37.714982598 +0000 UTC m=+153.748305167" lastFinishedPulling="2026-02-19 05:28:10.666869895 +0000 UTC m=+186.700192474" observedRunningTime="2026-02-19 05:28:11.085449106 +0000 UTC m=+187.118771685" watchObservedRunningTime="2026-02-19 05:28:11.086935736 +0000 UTC m=+187.120258315" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.117290 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.117426 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.137212 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xrjxk" podStartSLOduration=2.293232252 podStartE2EDuration="37.137190402s" podCreationTimestamp="2026-02-19 05:27:34 +0000 UTC" firstStartedPulling="2026-02-19 05:27:35.593530015 +0000 UTC m=+151.626852584" lastFinishedPulling="2026-02-19 05:28:10.437488155 +0000 UTC m=+186.470810734" observedRunningTime="2026-02-19 05:28:11.135427874 +0000 UTC m=+187.168750453" watchObservedRunningTime="2026-02-19 05:28:11.137190402 +0000 UTC m=+187.170512991" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.140695 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-q5cb2" podStartSLOduration=165.140684658 podStartE2EDuration="2m45.140684658s" podCreationTimestamp="2026-02-19 05:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:28:11.10496397 +0000 UTC m=+187.138286539" watchObservedRunningTime="2026-02-19 05:28:11.140684658 +0000 UTC m=+187.174007227" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.161139 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-5q7vk" podStartSLOduration=2.270927781 podStartE2EDuration="37.161118167s" podCreationTimestamp="2026-02-19 05:27:34 +0000 UTC" firstStartedPulling="2026-02-19 05:27:35.569566439 +0000 UTC m=+151.602889008" lastFinishedPulling="2026-02-19 05:28:10.459756815 +0000 UTC m=+186.493079394" observedRunningTime="2026-02-19 05:28:11.159685858 +0000 UTC m=+187.193008417" watchObservedRunningTime="2026-02-19 05:28:11.161118167 +0000 UTC m=+187.194440736" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.218507 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.218603 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.219633 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.282159 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.328567 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 19 05:28:11 crc kubenswrapper[5012]: I0219 05:28:11.606464 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 19 05:28:12 crc kubenswrapper[5012]: I0219 05:28:12.066045 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6","Type":"ContainerStarted","Data":"2827b8b3cec7ebda760fcd4cccdb82fd6769198232e34770d264b12d69853428"}
Feb 19 05:28:12 crc kubenswrapper[5012]: I0219 05:28:12.066354 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6","Type":"ContainerStarted","Data":"272aa842b44d1361b81ebd3418e0e6febdf25993404d5a07b1857e9a8cbdda1e"}
Feb 19 05:28:12 crc kubenswrapper[5012]: I0219 05:28:12.081753 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.081724103 podStartE2EDuration="2.081724103s" podCreationTimestamp="2026-02-19 05:28:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:28:12.080408597 +0000 UTC m=+188.113731166" watchObservedRunningTime="2026-02-19 05:28:12.081724103 +0000 UTC m=+188.115046672"
Feb 19 05:28:13 crc kubenswrapper[5012]: I0219 05:28:13.026415 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 05:28:13 crc kubenswrapper[5012]: I0219 05:28:13.079402 5012 generic.go:334] "Generic (PLEG): container finished" podID="dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6" containerID="2827b8b3cec7ebda760fcd4cccdb82fd6769198232e34770d264b12d69853428" exitCode=0
Feb 19 05:28:13 crc kubenswrapper[5012]: I0219 05:28:13.079455 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6","Type":"ContainerDied","Data":"2827b8b3cec7ebda760fcd4cccdb82fd6769198232e34770d264b12d69853428"}
Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.349748 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.368338 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kube-api-access\") pod \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\" (UID: \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\") "
Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.368436 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kubelet-dir\") pod \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\" (UID: \"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6\") "
Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.368667 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6" (UID: "dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.378890 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6" (UID: "dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.430283 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.430385 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.470207 5012 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.470234 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.529705 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xrjxk"
Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.530687 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xrjxk"
Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.779735 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5q7vk"
Feb 19 05:28:14 crc kubenswrapper[5012]: I0219 05:28:14.779778 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5q7vk"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.045656 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6mmvm"]
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.067597 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5q7vk"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.071082 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xrjxk"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.095324 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.095675 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6","Type":"ContainerDied","Data":"272aa842b44d1361b81ebd3418e0e6febdf25993404d5a07b1857e9a8cbdda1e"}
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.095697 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="272aa842b44d1361b81ebd3418e0e6febdf25993404d5a07b1857e9a8cbdda1e"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.148213 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xrjxk"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.149126 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5q7vk"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.763927 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 19 05:28:15 crc kubenswrapper[5012]: E0219 05:28:15.764207 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6" containerName="pruner"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.764225 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6" containerName="pruner"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.764367 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="dac8a9d5-69fe-4fd9-9e10-e9ff437db8a6" containerName="pruner"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.764794 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.767165 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.770018 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.774444 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.787271 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kube-api-access\") pod \"installer-9-crc\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.787403 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-var-lock\") pod \"installer-9-crc\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.787457 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.889090 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kube-api-access\") pod \"installer-9-crc\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.889139 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-var-lock\") pod \"installer-9-crc\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.889159 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.889259 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.889271 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-var-lock\") pod \"installer-9-crc\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 19 05:28:15 crc kubenswrapper[5012]: I0219 05:28:15.915772 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kube-api-access\") pod \"installer-9-crc\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 19 05:28:16 crc kubenswrapper[5012]: I0219 05:28:16.097941 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 19 05:28:16 crc kubenswrapper[5012]: I0219 05:28:16.330495 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 19 05:28:16 crc kubenswrapper[5012]: I0219 05:28:16.343830 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-29nf4"
Feb 19 05:28:16 crc kubenswrapper[5012]: I0219 05:28:16.343895 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-29nf4"
Feb 19 05:28:16 crc kubenswrapper[5012]: I0219 05:28:16.410820 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-29nf4"
Feb 19 05:28:16 crc kubenswrapper[5012]: I0219 05:28:16.777252 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x576d"
Feb 19 05:28:16 crc kubenswrapper[5012]: I0219 05:28:16.778382 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x576d"
Feb 19 05:28:16 crc kubenswrapper[5012]: I0219 05:28:16.833815 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x576d"
Feb 19 05:28:17 crc kubenswrapper[5012]: I0219 05:28:17.113549 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a60ebe63-e6e8-4716-b6a7-09471bd1761c","Type":"ContainerStarted","Data":"c50ce29ccaa2a6dec6251ba3718a50f9703f02c1604926defd790f301c9095a8"}
Feb 19 05:28:17 crc kubenswrapper[5012]: I0219 05:28:17.113627 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a60ebe63-e6e8-4716-b6a7-09471bd1761c","Type":"ContainerStarted","Data":"52bbe53adc0bf39915d8efea51ec4cc82fe83aee77b31b0c3c447b900626737b"}
Feb 19 05:28:17 crc kubenswrapper[5012]: I0219 05:28:17.133668 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.133649097 podStartE2EDuration="2.133649097s" podCreationTimestamp="2026-02-19 05:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:28:17.133621646 +0000 UTC m=+193.166944245" watchObservedRunningTime="2026-02-19 05:28:17.133649097 +0000 UTC m=+193.166971666"
Feb 19 05:28:17 crc kubenswrapper[5012]: I0219 05:28:17.169399 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x576d"
Feb 19 05:28:17 crc kubenswrapper[5012]: I0219 05:28:17.182012 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-29nf4"
Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.126795 5012 generic.go:334] "Generic (PLEG): container finished" podID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerID="a95a4d514f0d6754b1714fed7c7959350d2abe5a30fa95a4004bef33fad2569c" exitCode=0
Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.126893 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xvs8" event={"ID":"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9","Type":"ContainerDied","Data":"a95a4d514f0d6754b1714fed7c7959350d2abe5a30fa95a4004bef33fad2569c"}
Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.252674 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5q7vk"]
Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.253008 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5q7vk" podUID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerName="registry-server" containerID="cri-o://ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf" gracePeriod=2
Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.746743 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5q7vk"
Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.947346 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-catalog-content\") pod \"6173dc70-80d4-4f9f-9129-898b2dc38692\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") "
Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.948000 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-utilities\") pod \"6173dc70-80d4-4f9f-9129-898b2dc38692\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") "
Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.948054 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrsrw\" (UniqueName: \"kubernetes.io/projected/6173dc70-80d4-4f9f-9129-898b2dc38692-kube-api-access-nrsrw\") pod \"6173dc70-80d4-4f9f-9129-898b2dc38692\" (UID: \"6173dc70-80d4-4f9f-9129-898b2dc38692\") "
Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.948876 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-utilities" (OuterVolumeSpecName: "utilities") pod "6173dc70-80d4-4f9f-9129-898b2dc38692" (UID: "6173dc70-80d4-4f9f-9129-898b2dc38692"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:28:18 crc kubenswrapper[5012]: I0219 05:28:18.958514 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6173dc70-80d4-4f9f-9129-898b2dc38692-kube-api-access-nrsrw" (OuterVolumeSpecName: "kube-api-access-nrsrw") pod "6173dc70-80d4-4f9f-9129-898b2dc38692" (UID: "6173dc70-80d4-4f9f-9129-898b2dc38692"). InnerVolumeSpecName "kube-api-access-nrsrw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.012019 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6173dc70-80d4-4f9f-9129-898b2dc38692" (UID: "6173dc70-80d4-4f9f-9129-898b2dc38692"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.050438 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.050491 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrsrw\" (UniqueName: \"kubernetes.io/projected/6173dc70-80d4-4f9f-9129-898b2dc38692-kube-api-access-nrsrw\") on node \"crc\" DevicePath \"\""
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.050504 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6173dc70-80d4-4f9f-9129-898b2dc38692-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.136359 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xvs8" event={"ID":"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9","Type":"ContainerStarted","Data":"bcadb8bab70733341b7bb0cee1dc27ad28111033c1f70563d157cf39fc870bc1"}
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.139688 5012 generic.go:334] "Generic (PLEG): container finished" podID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerID="ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf" exitCode=0
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.139751 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5q7vk"
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.139751 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q7vk" event={"ID":"6173dc70-80d4-4f9f-9129-898b2dc38692","Type":"ContainerDied","Data":"ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf"}
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.139881 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5q7vk" event={"ID":"6173dc70-80d4-4f9f-9129-898b2dc38692","Type":"ContainerDied","Data":"51fb0c10b65b4e5eeccf825cbe8bef0aec67c350bcfafc478899d702eea9c2e4"}
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.139904 5012 scope.go:117] "RemoveContainer" containerID="ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf"
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.163137 5012 scope.go:117] "RemoveContainer" containerID="87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4"
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.177130 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4xvs8" podStartSLOduration=2.016196587 podStartE2EDuration="45.177107124s" podCreationTimestamp="2026-02-19 05:27:34 +0000 UTC" firstStartedPulling="2026-02-19 05:27:35.569540188 +0000 UTC m=+151.602862757" lastFinishedPulling="2026-02-19 05:28:18.730450725 +0000 UTC m=+194.763773294" observedRunningTime="2026-02-19 05:28:19.160344815 +0000 UTC m=+195.193667394" watchObservedRunningTime="2026-02-19 05:28:19.177107124 +0000 UTC m=+195.210429703"
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.181345 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5q7vk"]
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.185151 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5q7vk"]
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.209094 5012 scope.go:117] "RemoveContainer" containerID="0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc"
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.242701 5012 scope.go:117] "RemoveContainer" containerID="ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf"
Feb 19 05:28:19 crc kubenswrapper[5012]: E0219 05:28:19.243540 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf\": container with ID starting with ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf not found: ID does not exist" containerID="ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf"
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.243604 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf"} err="failed to get container status \"ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf\": rpc error: code = NotFound desc = could not find container \"ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf\": container with ID starting with ba7f1143e5555843a21c6d8a6871a43dd2e28b7561a5e5320266197d397e5fbf not found: ID does not exist"
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.243687 5012 scope.go:117] "RemoveContainer" containerID="87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4"
Feb 19 05:28:19 crc kubenswrapper[5012]: E0219 05:28:19.244285 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4\": container with ID starting with 87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4 not found: ID does not exist" containerID="87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4"
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.244332 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4"} err="failed to get container status \"87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4\": rpc error: code = NotFound desc = could not find container \"87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4\": container with ID starting with 87ed2d73953cfb9fc58b74a18b63b38a22fda215606b991115d37c3d4ff47cd4 not found: ID does not exist"
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.244353 5012 scope.go:117] "RemoveContainer" containerID="0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc"
Feb 19 05:28:19 crc kubenswrapper[5012]: E0219 05:28:19.244768 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc\": container with ID starting with 0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc not found: ID does not exist" containerID="0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc"
Feb 19 05:28:19 crc kubenswrapper[5012]: I0219 05:28:19.244797 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc"} err="failed to get container status \"0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc\": rpc error: code = NotFound desc = could not find container \"0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc\": container with ID starting with 0590dbccb6fa246898521f687667503a76ee300dce900341fc7bebe73d1eecdc not found: ID does not exist"
Feb 19 05:28:20 crc kubenswrapper[5012]: I0219 05:28:20.448121 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x576d"]
Feb 19 05:28:20 crc kubenswrapper[5012]: I0219 05:28:20.448697 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x576d" podUID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerName="registry-server" containerID="cri-o://1f7b0db88035160e95fb281ad896b78f2deead667d95c19e81640e666f8610f7" gracePeriod=2
Feb 19 05:28:20 crc kubenswrapper[5012]: I0219 05:28:20.711439 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6173dc70-80d4-4f9f-9129-898b2dc38692" path="/var/lib/kubelet/pods/6173dc70-80d4-4f9f-9129-898b2dc38692/volumes"
Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.158896 5012 generic.go:334] "Generic (PLEG): container finished" podID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerID="1f7b0db88035160e95fb281ad896b78f2deead667d95c19e81640e666f8610f7" exitCode=0
Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.158948 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x576d" event={"ID":"2269b2c9-4876-43e3-85ce-9650ffec804f","Type":"ContainerDied","Data":"1f7b0db88035160e95fb281ad896b78f2deead667d95c19e81640e666f8610f7"}
Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.785460 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x576d"
Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.793011 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g9fc\" (UniqueName: \"kubernetes.io/projected/2269b2c9-4876-43e3-85ce-9650ffec804f-kube-api-access-4g9fc\") pod \"2269b2c9-4876-43e3-85ce-9650ffec804f\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") "
Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.793065 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-catalog-content\") pod \"2269b2c9-4876-43e3-85ce-9650ffec804f\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") "
Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.793110 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-utilities\") pod \"2269b2c9-4876-43e3-85ce-9650ffec804f\" (UID: \"2269b2c9-4876-43e3-85ce-9650ffec804f\") "
Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.795555 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-utilities" (OuterVolumeSpecName: "utilities") pod "2269b2c9-4876-43e3-85ce-9650ffec804f" (UID: "2269b2c9-4876-43e3-85ce-9650ffec804f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.803756 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2269b2c9-4876-43e3-85ce-9650ffec804f-kube-api-access-4g9fc" (OuterVolumeSpecName: "kube-api-access-4g9fc") pod "2269b2c9-4876-43e3-85ce-9650ffec804f" (UID: "2269b2c9-4876-43e3-85ce-9650ffec804f"). InnerVolumeSpecName "kube-api-access-4g9fc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.822122 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2269b2c9-4876-43e3-85ce-9650ffec804f" (UID: "2269b2c9-4876-43e3-85ce-9650ffec804f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.895088 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g9fc\" (UniqueName: \"kubernetes.io/projected/2269b2c9-4876-43e3-85ce-9650ffec804f-kube-api-access-4g9fc\") on node \"crc\" DevicePath \"\""
Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.895233 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 05:28:21 crc kubenswrapper[5012]: I0219 05:28:21.895251 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2269b2c9-4876-43e3-85ce-9650ffec804f-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 05:28:22 crc kubenswrapper[5012]: I0219 05:28:22.168427 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x576d" event={"ID":"2269b2c9-4876-43e3-85ce-9650ffec804f","Type":"ContainerDied","Data":"e1cb3c13c7905eb67fe5b6fee6bfb21b5e93340cd8fbda0eba5b4e60709ae667"}
Feb 19 05:28:22 crc kubenswrapper[5012]: I0219 05:28:22.168510 5012 scope.go:117] "RemoveContainer" containerID="1f7b0db88035160e95fb281ad896b78f2deead667d95c19e81640e666f8610f7"
Feb 19 05:28:22 crc kubenswrapper[5012]: I0219 05:28:22.168561 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x576d"
Feb 19 05:28:22 crc kubenswrapper[5012]: I0219 05:28:22.189162 5012 scope.go:117] "RemoveContainer" containerID="0cfb2a088fe11f89edf830fa194013f6aaa648491c76af6a0be0b1ed87f083f2"
Feb 19 05:28:22 crc kubenswrapper[5012]: I0219 05:28:22.198943 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x576d"]
Feb 19 05:28:22 crc kubenswrapper[5012]: I0219 05:28:22.205735 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x576d"]
Feb 19 05:28:22 crc kubenswrapper[5012]: I0219 05:28:22.208146 5012 scope.go:117] "RemoveContainer" containerID="425cdf1067d5b62c628d2a89d10d8e953e7e1cf4ee46294d6c0c129fa2655d83"
Feb 19 05:28:22 crc kubenswrapper[5012]: I0219 05:28:22.710357 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2269b2c9-4876-43e3-85ce-9650ffec804f" path="/var/lib/kubelet/pods/2269b2c9-4876-43e3-85ce-9650ffec804f/volumes"
Feb 19 05:28:23 crc kubenswrapper[5012]: I0219 05:28:23.177126 5012 generic.go:334] "Generic (PLEG): container finished" podID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerID="cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b" exitCode=0
Feb 19 05:28:23 crc kubenswrapper[5012]: I0219 05:28:23.177226 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-br86z" event={"ID":"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc","Type":"ContainerDied","Data":"cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b"}
Feb 19 05:28:24 crc kubenswrapper[5012]: I0219 05:28:24.356707 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4xvs8"
Feb 19 05:28:24 crc kubenswrapper[5012]: I0219 05:28:24.357050 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4xvs8"
Feb 19 05:28:24 crc kubenswrapper[5012]: I0219 05:28:24.411725 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4xvs8"
Feb 19 05:28:25 crc kubenswrapper[5012]: I0219 05:28:25.188163 5012 generic.go:334] "Generic (PLEG): container finished" podID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerID="6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166" exitCode=0
Feb 19 05:28:25 crc kubenswrapper[5012]: I0219 05:28:25.188240 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48wp9" event={"ID":"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a","Type":"ContainerDied","Data":"6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166"}
Feb 19 05:28:25 crc kubenswrapper[5012]: I0219 05:28:25.237955 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4xvs8"
Feb 19 05:28:26 crc kubenswrapper[5012]: I0219 05:28:26.198791 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-br86z" event={"ID":"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc","Type":"ContainerStarted","Data":"02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8"}
Feb 19 05:28:26 crc kubenswrapper[5012]: I0219 05:28:26.226314 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-br86z" podStartSLOduration=4.108732446 podStartE2EDuration="52.226279582s" podCreationTimestamp="2026-02-19 05:27:34 +0000 UTC" firstStartedPulling="2026-02-19 05:27:36.705923701 +0000 UTC m=+152.739246270" lastFinishedPulling="2026-02-19 05:28:24.823470837 +0000 UTC m=+200.856793406" observedRunningTime="2026-02-19 05:28:26.222029419 +0000 UTC m=+202.255351978" watchObservedRunningTime="2026-02-19 05:28:26.226279582 +0000 UTC m=+202.259602151"
Feb 19 05:28:28 crc kubenswrapper[5012]: I0219 05:28:28.219675 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48wp9" event={"ID":"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a","Type":"ContainerStarted","Data":"dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889"}
Feb 19 05:28:28 crc kubenswrapper[5012]: I0219 05:28:28.255154 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-48wp9" podStartSLOduration=2.679412905 podStartE2EDuration="51.255128056s" podCreationTimestamp="2026-02-19 05:27:37 +0000 UTC" firstStartedPulling="2026-02-19 05:27:38.739155088 +0000 UTC m=+154.772477657" lastFinishedPulling="2026-02-19 05:28:27.314870239 +0000 UTC m=+203.348192808" observedRunningTime="2026-02-19 05:28:28.252116845 +0000 UTC m=+204.285439454" watchObservedRunningTime="2026-02-19 05:28:28.255128056 +0000 UTC m=+204.288450655"
Feb 19 05:28:30 crc kubenswrapper[5012]: I0219 05:28:30.238739 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprhz" event={"ID":"e45c788c-c8a0-4563-8d05-71915e390342","Type":"ContainerStarted","Data":"842ea38ab87f30dad259cf1979c7ff921a55d4d9e323ba2c8e89f149a1596602"}
Feb 19 05:28:31 crc kubenswrapper[5012]: I0219 05:28:31.251719 5012 generic.go:334] "Generic (PLEG): container finished" podID="e45c788c-c8a0-4563-8d05-71915e390342" containerID="842ea38ab87f30dad259cf1979c7ff921a55d4d9e323ba2c8e89f149a1596602" exitCode=0
Feb 19 05:28:31 crc kubenswrapper[5012]: I0219 05:28:31.251778 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprhz" event={"ID":"e45c788c-c8a0-4563-8d05-71915e390342","Type":"ContainerDied","Data":"842ea38ab87f30dad259cf1979c7ff921a55d4d9e323ba2c8e89f149a1596602"}
Feb 19 05:28:32 crc kubenswrapper[5012]: I0219 05:28:32.262785 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprhz" event={"ID":"e45c788c-c8a0-4563-8d05-71915e390342","Type":"ContainerStarted","Data":"4d05b281db5317fbaf4180dd6656c44165f2aee89a9fa2e17cd24d4380132350"}
Feb 19 05:28:32 crc kubenswrapper[5012]: I0219 05:28:32.292911 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rprhz" podStartSLOduration=2.3351563459999998 podStartE2EDuration="55.292885563s" podCreationTimestamp="2026-02-19 05:27:37 +0000 UTC" firstStartedPulling="2026-02-19 05:27:38.736611999 +0000 UTC m=+154.769934568" lastFinishedPulling="2026-02-19 05:28:31.694341186 +0000 UTC m=+207.727663785" observedRunningTime="2026-02-19 05:28:32.291788084 +0000 UTC m=+208.325110663" watchObservedRunningTime="2026-02-19 05:28:32.292885563 +0000 UTC m=+208.326208132"
Feb 19 05:28:34 crc kubenswrapper[5012]: I0219 05:28:34.981499 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-br86z"
Feb 19 05:28:34 crc kubenswrapper[5012]: I0219 05:28:34.981942 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-br86z"
Feb 19 05:28:35 crc kubenswrapper[5012]: I0219 05:28:35.054271 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-br86z"
Feb 19 05:28:35 crc kubenswrapper[5012]: I0219 05:28:35.351712 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-br86z"
Feb 19 05:28:36 crc kubenswrapper[5012]: I0219 05:28:36.652024 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-br86z"]
Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.299486 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-br86z" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerName="registry-server"
containerID="cri-o://02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8" gracePeriod=2 Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.560470 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.560527 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.711584 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-br86z" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.767080 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-catalog-content\") pod \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.767648 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzlc7\" (UniqueName: \"kubernetes.io/projected/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-kube-api-access-tzlc7\") pod \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.767693 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-utilities\") pod \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\" (UID: \"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc\") " Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.769024 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-utilities" (OuterVolumeSpecName: "utilities") pod 
"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" (UID: "e16bf8e1-cd8b-48fc-9726-40c1b397a6bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.777389 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-kube-api-access-tzlc7" (OuterVolumeSpecName: "kube-api-access-tzlc7") pod "e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" (UID: "e16bf8e1-cd8b-48fc-9726-40c1b397a6bc"). InnerVolumeSpecName "kube-api-access-tzlc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.814956 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" (UID: "e16bf8e1-cd8b-48fc-9726-40c1b397a6bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.869160 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.869260 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzlc7\" (UniqueName: \"kubernetes.io/projected/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-kube-api-access-tzlc7\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.869284 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.981594 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:28:37 crc kubenswrapper[5012]: I0219 05:28:37.981666 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.036638 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.307925 5012 generic.go:334] "Generic (PLEG): container finished" podID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerID="02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8" exitCode=0 Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.308009 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-br86z" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.308058 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-br86z" event={"ID":"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc","Type":"ContainerDied","Data":"02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8"} Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.308140 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-br86z" event={"ID":"e16bf8e1-cd8b-48fc-9726-40c1b397a6bc","Type":"ContainerDied","Data":"be080096b804213f30565dd54118337146dcc411c16ff0c8a6962f9fd3f03e3a"} Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.308171 5012 scope.go:117] "RemoveContainer" containerID="02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.332188 5012 scope.go:117] "RemoveContainer" containerID="cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.348461 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-br86z"] Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.352421 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-br86z"] Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.377234 5012 scope.go:117] "RemoveContainer" containerID="e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.389671 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.402360 5012 scope.go:117] "RemoveContainer" containerID="02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8" Feb 19 05:28:38 crc 
kubenswrapper[5012]: E0219 05:28:38.403137 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8\": container with ID starting with 02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8 not found: ID does not exist" containerID="02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.403176 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8"} err="failed to get container status \"02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8\": rpc error: code = NotFound desc = could not find container \"02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8\": container with ID starting with 02370d0808ca51b7539e55bb41fcf7fbbc781d93bf50a6d08c1ac24377fe4ed8 not found: ID does not exist" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.403208 5012 scope.go:117] "RemoveContainer" containerID="cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b" Feb 19 05:28:38 crc kubenswrapper[5012]: E0219 05:28:38.403678 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b\": container with ID starting with cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b not found: ID does not exist" containerID="cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.403747 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b"} err="failed to get container status 
\"cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b\": rpc error: code = NotFound desc = could not find container \"cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b\": container with ID starting with cf075a3ebf0fe462e8351193d90b080484a636a510c7c14aed8a68c6045cc90b not found: ID does not exist" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.403811 5012 scope.go:117] "RemoveContainer" containerID="e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56" Feb 19 05:28:38 crc kubenswrapper[5012]: E0219 05:28:38.404351 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56\": container with ID starting with e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56 not found: ID does not exist" containerID="e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.404390 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56"} err="failed to get container status \"e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56\": rpc error: code = NotFound desc = could not find container \"e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56\": container with ID starting with e2e5ff45ec42e6f06d070ec9cc402e1cfe2bbf1379c34661c7d2e989ee904a56 not found: ID does not exist" Feb 19 05:28:38 crc kubenswrapper[5012]: I0219 05:28:38.636934 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rprhz" podUID="e45c788c-c8a0-4563-8d05-71915e390342" containerName="registry-server" probeResult="failure" output=< Feb 19 05:28:38 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 05:28:38 crc kubenswrapper[5012]: > Feb 19 05:28:38 crc 
kubenswrapper[5012]: I0219 05:28:38.716342 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" path="/var/lib/kubelet/pods/e16bf8e1-cd8b-48fc-9726-40c1b397a6bc/volumes" Feb 19 05:28:39 crc kubenswrapper[5012]: I0219 05:28:39.852896 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-48wp9"] Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.078976 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" podUID="ce585ab5-2554-4d20-8789-cf5bfa8e45a7" containerName="oauth-openshift" containerID="cri-o://2dcd03507647b2936efc16e245313a460e479c8027de7859ce5d48daf431680d" gracePeriod=15 Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.325897 5012 generic.go:334] "Generic (PLEG): container finished" podID="ce585ab5-2554-4d20-8789-cf5bfa8e45a7" containerID="2dcd03507647b2936efc16e245313a460e479c8027de7859ce5d48daf431680d" exitCode=0 Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.326026 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" event={"ID":"ce585ab5-2554-4d20-8789-cf5bfa8e45a7","Type":"ContainerDied","Data":"2dcd03507647b2936efc16e245313a460e479c8027de7859ce5d48daf431680d"} Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.326115 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-48wp9" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerName="registry-server" containerID="cri-o://dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889" gracePeriod=2 Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.501807 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.608795 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-trusted-ca-bundle\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.608858 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-policies\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.608897 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmlnj\" (UniqueName: \"kubernetes.io/projected/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-kube-api-access-pmlnj\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.608968 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-provider-selection\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.609031 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-ocp-branding-template\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: 
\"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.609067 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-serving-cert\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.609094 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-service-ca\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.609129 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-idp-0-file-data\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.609161 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-cliconfig\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.609183 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-router-certs\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc 
kubenswrapper[5012]: I0219 05:28:40.609206 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-error\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.609234 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-dir\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.609262 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-session\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.609317 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-login\") pod \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\" (UID: \"ce585ab5-2554-4d20-8789-cf5bfa8e45a7\") " Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.611295 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.611523 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.612372 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.612531 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.619159 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.619697 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.620035 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-kube-api-access-pmlnj" (OuterVolumeSpecName: "kube-api-access-pmlnj") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "kube-api-access-pmlnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.620093 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.621083 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.621371 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.621690 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.624261 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.631857 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.637635 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ce585ab5-2554-4d20-8789-cf5bfa8e45a7" (UID: "ce585ab5-2554-4d20-8789-cf5bfa8e45a7"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.711173 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.711764 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.711793 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.711826 5012 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.711856 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 
crc kubenswrapper[5012]: I0219 05:28:40.711884 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.711912 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.712042 5012 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.712065 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmlnj\" (UniqueName: \"kubernetes.io/projected/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-kube-api-access-pmlnj\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.712087 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.712109 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.712133 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.712161 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:40 crc kubenswrapper[5012]: I0219 05:28:40.712192 5012 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ce585ab5-2554-4d20-8789-cf5bfa8e45a7-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.266866 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.324454 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlfxl\" (UniqueName: \"kubernetes.io/projected/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-kube-api-access-rlfxl\") pod \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.326579 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-utilities\") pod \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.326709 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-catalog-content\") pod \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\" (UID: \"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a\") " Feb 19 
05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.329139 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-utilities" (OuterVolumeSpecName: "utilities") pod "7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" (UID: "7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.333714 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-kube-api-access-rlfxl" (OuterVolumeSpecName: "kube-api-access-rlfxl") pod "7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" (UID: "7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a"). InnerVolumeSpecName "kube-api-access-rlfxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.336760 5012 generic.go:334] "Generic (PLEG): container finished" podID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerID="dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889" exitCode=0 Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.336849 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-48wp9" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.336852 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48wp9" event={"ID":"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a","Type":"ContainerDied","Data":"dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889"} Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.336933 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-48wp9" event={"ID":"7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a","Type":"ContainerDied","Data":"d28f8c0cd228cb43c2f0346277beec93d922e7fa5ce5493ec945b62c4230d6ab"} Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.336960 5012 scope.go:117] "RemoveContainer" containerID="dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.345365 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" event={"ID":"ce585ab5-2554-4d20-8789-cf5bfa8e45a7","Type":"ContainerDied","Data":"bdf60105a735686277da3c5b1467ac389878a76a65699e5227c68bdc76452b4e"} Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.345486 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6mmvm" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.380826 5012 scope.go:117] "RemoveContainer" containerID="6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.383884 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6mmvm"] Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.387207 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6mmvm"] Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.409015 5012 scope.go:117] "RemoveContainer" containerID="beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.431785 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlfxl\" (UniqueName: \"kubernetes.io/projected/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-kube-api-access-rlfxl\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.431832 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.432467 5012 scope.go:117] "RemoveContainer" containerID="dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889" Feb 19 05:28:41 crc kubenswrapper[5012]: E0219 05:28:41.433213 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889\": container with ID starting with dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889 not found: ID does not exist" containerID="dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889" Feb 
19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.433279 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889"} err="failed to get container status \"dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889\": rpc error: code = NotFound desc = could not find container \"dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889\": container with ID starting with dc3de7cbc4ca8c5963b37a69736a8b89cf1fd18522dcaa12072b8a68e64e5889 not found: ID does not exist" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.433349 5012 scope.go:117] "RemoveContainer" containerID="6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166" Feb 19 05:28:41 crc kubenswrapper[5012]: E0219 05:28:41.433805 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166\": container with ID starting with 6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166 not found: ID does not exist" containerID="6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.433852 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166"} err="failed to get container status \"6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166\": rpc error: code = NotFound desc = could not find container \"6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166\": container with ID starting with 6a1182e8b31ed88b12c7b936e28f7add730a39e4fcd34d4b8e3474c2c02c0166 not found: ID does not exist" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.433881 5012 scope.go:117] "RemoveContainer" 
containerID="beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d" Feb 19 05:28:41 crc kubenswrapper[5012]: E0219 05:28:41.434256 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d\": container with ID starting with beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d not found: ID does not exist" containerID="beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.434295 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d"} err="failed to get container status \"beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d\": rpc error: code = NotFound desc = could not find container \"beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d\": container with ID starting with beb2aeb76aad6c4e54925e8d07df252658a5d093de23de3bfd8c5d38cee9514d not found: ID does not exist" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.434345 5012 scope.go:117] "RemoveContainer" containerID="2dcd03507647b2936efc16e245313a460e479c8027de7859ce5d48daf431680d" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.504122 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" (UID: "7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.533685 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.683943 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-48wp9"] Feb 19 05:28:41 crc kubenswrapper[5012]: I0219 05:28:41.689619 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-48wp9"] Feb 19 05:28:42 crc kubenswrapper[5012]: I0219 05:28:42.712555 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" path="/var/lib/kubelet/pods/7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a/volumes" Feb 19 05:28:42 crc kubenswrapper[5012]: I0219 05:28:42.714653 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce585ab5-2554-4d20-8789-cf5bfa8e45a7" path="/var/lib/kubelet/pods/ce585ab5-2554-4d20-8789-cf5bfa8e45a7/volumes" Feb 19 05:28:44 crc kubenswrapper[5012]: I0219 05:28:44.430547 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:28:44 crc kubenswrapper[5012]: I0219 05:28:44.430625 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:28:44 crc kubenswrapper[5012]: I0219 05:28:44.430706 5012 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:28:44 crc kubenswrapper[5012]: I0219 05:28:44.431818 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 05:28:44 crc kubenswrapper[5012]: I0219 05:28:44.431925 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049" gracePeriod=600 Feb 19 05:28:45 crc kubenswrapper[5012]: I0219 05:28:45.381494 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049" exitCode=0 Feb 19 05:28:45 crc kubenswrapper[5012]: I0219 05:28:45.381655 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049"} Feb 19 05:28:45 crc kubenswrapper[5012]: I0219 05:28:45.382383 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"f28c70f18d16a390f7b96cc5399b8c6c7031b7f62ee2bccc4e33b9c7c28fc6a0"} Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.332569 5012 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg"] Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333465 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce585ab5-2554-4d20-8789-cf5bfa8e45a7" containerName="oauth-openshift" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333506 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce585ab5-2554-4d20-8789-cf5bfa8e45a7" containerName="oauth-openshift" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333531 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333547 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333569 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerName="extract-content" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333586 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerName="extract-content" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333612 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333630 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333656 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerName="extract-utilities" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333672 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerName="extract-utilities" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333705 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerName="extract-utilities" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333722 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerName="extract-utilities" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333744 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333761 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333787 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333803 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333827 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerName="extract-content" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333843 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerName="extract-content" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333862 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerName="extract-utilities" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333877 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerName="extract-utilities" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333901 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerName="extract-content" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333918 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerName="extract-content" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333942 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerName="extract-content" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333957 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerName="extract-content" Feb 19 05:28:46 crc kubenswrapper[5012]: E0219 05:28:46.333981 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerName="extract-utilities" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.333996 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerName="extract-utilities" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.334214 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="e16bf8e1-cd8b-48fc-9726-40c1b397a6bc" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.334242 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="2269b2c9-4876-43e3-85ce-9650ffec804f" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.334265 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b13dfa9-14e1-4ad5-b6c6-f86486a73e9a" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.334283 5012 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="6173dc70-80d4-4f9f-9129-898b2dc38692" containerName="registry-server" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.334339 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce585ab5-2554-4d20-8789-cf5bfa8e45a7" containerName="oauth-openshift" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.335100 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.341226 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.341461 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.341621 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.341799 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.341808 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.342018 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.343382 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.343549 5012 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"audit" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.344390 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.354244 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.354515 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.368969 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg"] Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.392695 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.394213 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.394896 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.399281 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-session\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.399413 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.399464 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.399499 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-template-login\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.399540 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.399572 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-template-error\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.399605 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.399897 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.399953 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.400068 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/259d2430-9728-4214-902c-aeafb7a74034-audit-dir\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: 
\"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.400135 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-audit-policies\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.400252 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhxl\" (UniqueName: \"kubernetes.io/projected/259d2430-9728-4214-902c-aeafb7a74034-kube-api-access-lqhxl\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.400350 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.400375 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.402605 5012 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502204 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-audit-policies\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502270 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqhxl\" (UniqueName: \"kubernetes.io/projected/259d2430-9728-4214-902c-aeafb7a74034-kube-api-access-lqhxl\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502333 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502358 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502392 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-session\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502425 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502460 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502489 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-template-login\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502516 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: 
\"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502539 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-template-error\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502564 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502610 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502648 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502699 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/259d2430-9728-4214-902c-aeafb7a74034-audit-dir\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.502789 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/259d2430-9728-4214-902c-aeafb7a74034-audit-dir\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.503566 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.503707 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-audit-policies\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.504415 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " 
pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.506062 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.509078 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.510063 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.510815 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-template-login\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.511067 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-session\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.511653 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.511920 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.509832 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.516636 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/259d2430-9728-4214-902c-aeafb7a74034-v4-0-config-user-template-error\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " 
pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.526719 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqhxl\" (UniqueName: \"kubernetes.io/projected/259d2430-9728-4214-902c-aeafb7a74034-kube-api-access-lqhxl\") pod \"oauth-openshift-64f4b9bb7f-lfvrg\" (UID: \"259d2430-9728-4214-902c-aeafb7a74034\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.684564 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:46 crc kubenswrapper[5012]: I0219 05:28:46.930242 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg"] Feb 19 05:28:46 crc kubenswrapper[5012]: W0219 05:28:46.935457 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod259d2430_9728_4214_902c_aeafb7a74034.slice/crio-2069268689d599dc4f86a7ada870241590d4b4feb7409dfc43d39ce5ecc857aa WatchSource:0}: Error finding container 2069268689d599dc4f86a7ada870241590d4b4feb7409dfc43d39ce5ecc857aa: Status 404 returned error can't find the container with id 2069268689d599dc4f86a7ada870241590d4b4feb7409dfc43d39ce5ecc857aa Feb 19 05:28:47 crc kubenswrapper[5012]: I0219 05:28:47.404704 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" event={"ID":"259d2430-9728-4214-902c-aeafb7a74034","Type":"ContainerStarted","Data":"cbefc13777846cb6503b214344af4b5d9528796f30de7fa0f13cde87379f05ff"} Feb 19 05:28:47 crc kubenswrapper[5012]: I0219 05:28:47.405058 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" 
event={"ID":"259d2430-9728-4214-902c-aeafb7a74034","Type":"ContainerStarted","Data":"2069268689d599dc4f86a7ada870241590d4b4feb7409dfc43d39ce5ecc857aa"} Feb 19 05:28:47 crc kubenswrapper[5012]: I0219 05:28:47.406630 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:47 crc kubenswrapper[5012]: I0219 05:28:47.440804 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" podStartSLOduration=32.440781022 podStartE2EDuration="32.440781022s" podCreationTimestamp="2026-02-19 05:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:28:47.440106124 +0000 UTC m=+223.473428733" watchObservedRunningTime="2026-02-19 05:28:47.440781022 +0000 UTC m=+223.474103621" Feb 19 05:28:47 crc kubenswrapper[5012]: I0219 05:28:47.634594 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:28:47 crc kubenswrapper[5012]: I0219 05:28:47.706815 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rprhz" Feb 19 05:28:47 crc kubenswrapper[5012]: I0219 05:28:47.776072 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-lfvrg" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.550119 5012 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.551004 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
containerID="cri-o://c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135" gracePeriod=15 Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.551135 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0" gracePeriod=15 Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.551192 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2" gracePeriod=15 Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.551187 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83" gracePeriod=15 Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.551289 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9" gracePeriod=15 Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.551951 5012 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 05:28:54 crc kubenswrapper[5012]: E0219 05:28:54.552179 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552199 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 05:28:54 crc kubenswrapper[5012]: E0219 05:28:54.552219 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552229 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 05:28:54 crc kubenswrapper[5012]: E0219 05:28:54.552239 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552249 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 05:28:54 crc kubenswrapper[5012]: E0219 05:28:54.552265 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552273 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 05:28:54 crc kubenswrapper[5012]: E0219 05:28:54.552284 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552292 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 05:28:54 crc kubenswrapper[5012]: E0219 
05:28:54.552320 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552330 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 05:28:54 crc kubenswrapper[5012]: E0219 05:28:54.552339 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552347 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552463 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552474 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552491 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552502 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552514 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.552524 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 
05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.554124 5012 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.554680 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.558336 5012 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.626350 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.652936 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.652991 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.653036 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.653089 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.653121 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.653141 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.653174 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.653200 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.708471 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755048 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755151 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755219 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755227 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755332 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755378 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755428 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755478 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755498 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755588 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755625 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755649 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755699 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.755825 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.756158 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.756322 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: I0219 05:28:54.926409 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 05:28:54 crc kubenswrapper[5012]: E0219 05:28:54.960076 5012 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18958eb4a555e533 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 
05:28:54.959285555 +0000 UTC m=+230.992608174,LastTimestamp:2026-02-19 05:28:54.959285555 +0000 UTC m=+230.992608174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 05:28:55 crc kubenswrapper[5012]: E0219 05:28:55.378025 5012 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:55 crc kubenswrapper[5012]: E0219 05:28:55.379190 5012 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:55 crc kubenswrapper[5012]: E0219 05:28:55.379697 5012 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:55 crc kubenswrapper[5012]: E0219 05:28:55.380098 5012 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:55 crc kubenswrapper[5012]: E0219 05:28:55.380548 5012 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.380601 5012 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 05:28:55 crc kubenswrapper[5012]: E0219 05:28:55.380950 
5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="200ms" Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.463589 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.465421 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.466549 5012 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0" exitCode=0 Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.466599 5012 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9" exitCode=0 Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.466617 5012 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83" exitCode=0 Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.466636 5012 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2" exitCode=2 Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.466659 5012 scope.go:117] "RemoveContainer" containerID="093795d54e42ffd6e667ddf6c55f8198229657299b90dc371c9925b94dac36de" Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.470932 
5012 generic.go:334] "Generic (PLEG): container finished" podID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" containerID="c50ce29ccaa2a6dec6251ba3718a50f9703f02c1604926defd790f301c9095a8" exitCode=0 Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.471054 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a60ebe63-e6e8-4716-b6a7-09471bd1761c","Type":"ContainerDied","Data":"c50ce29ccaa2a6dec6251ba3718a50f9703f02c1604926defd790f301c9095a8"} Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.471998 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.472599 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.473927 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b6f9cb760466cabd1e0a03b9e7b38403b65eda373e574649db72eb2355616bd8"} Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.473974 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7aae5124379fb33aea819bacdd31748ede0457373ca9eeb45432122370cef8f9"} Feb 19 05:28:55 crc 
kubenswrapper[5012]: I0219 05:28:55.475201 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:55 crc kubenswrapper[5012]: I0219 05:28:55.475714 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:55 crc kubenswrapper[5012]: E0219 05:28:55.581592 5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="400ms" Feb 19 05:28:55 crc kubenswrapper[5012]: E0219 05:28:55.982479 5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="800ms" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.483454 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.763376 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.764868 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.765366 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:56 crc kubenswrapper[5012]: E0219 05:28:56.784064 5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="1.6s" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.891351 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kube-api-access\") pod \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.891491 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-var-lock\") pod \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.891544 5012 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kubelet-dir\") pod \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\" (UID: \"a60ebe63-e6e8-4716-b6a7-09471bd1761c\") " Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.891639 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-var-lock" (OuterVolumeSpecName: "var-lock") pod "a60ebe63-e6e8-4716-b6a7-09471bd1761c" (UID: "a60ebe63-e6e8-4716-b6a7-09471bd1761c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.891736 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a60ebe63-e6e8-4716-b6a7-09471bd1761c" (UID: "a60ebe63-e6e8-4716-b6a7-09471bd1761c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.892032 5012 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.892051 5012 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.896875 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a60ebe63-e6e8-4716-b6a7-09471bd1761c" (UID: "a60ebe63-e6e8-4716-b6a7-09471bd1761c"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.992785 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a60ebe63-e6e8-4716-b6a7-09471bd1761c-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.993810 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.994486 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.995209 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.995851 5012 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:56 crc kubenswrapper[5012]: I0219 05:28:56.996447 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 
05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.093904 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.094049 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.094411 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.094473 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.094563 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.094707 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.095050 5012 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.095073 5012 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.095081 5012 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 19 05:28:57 crc kubenswrapper[5012]: E0219 05:28:57.260751 5012 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18958eb4a555e533 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 05:28:54.959285555 +0000 UTC m=+230.992608174,LastTimestamp:2026-02-19 05:28:54.959285555 +0000 UTC m=+230.992608174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.495589 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.497212 5012 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135" exitCode=0 Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.497272 5012 scope.go:117] "RemoveContainer" containerID="196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.497398 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.509889 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a60ebe63-e6e8-4716-b6a7-09471bd1761c","Type":"ContainerDied","Data":"52bbe53adc0bf39915d8efea51ec4cc82fe83aee77b31b0c3c447b900626737b"} Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.509937 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52bbe53adc0bf39915d8efea51ec4cc82fe83aee77b31b0c3c447b900626737b" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.510021 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.520366 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.520602 5012 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.520789 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 
05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.534374 5012 scope.go:117] "RemoveContainer" containerID="a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.537048 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.537285 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.537477 5012 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.560493 5012 scope.go:117] "RemoveContainer" containerID="d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.588374 5012 scope.go:117] "RemoveContainer" containerID="526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.621973 5012 scope.go:117] "RemoveContainer" containerID="c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.645096 5012 scope.go:117] "RemoveContainer" 
containerID="fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.664886 5012 scope.go:117] "RemoveContainer" containerID="196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0" Feb 19 05:28:57 crc kubenswrapper[5012]: E0219 05:28:57.665528 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\": container with ID starting with 196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0 not found: ID does not exist" containerID="196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.665599 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0"} err="failed to get container status \"196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\": rpc error: code = NotFound desc = could not find container \"196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0\": container with ID starting with 196b3265c347af9cd092b86bdeaa576992b54bf90ccb5fa8c026bc8150c13ec0 not found: ID does not exist" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.665641 5012 scope.go:117] "RemoveContainer" containerID="a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9" Feb 19 05:28:57 crc kubenswrapper[5012]: E0219 05:28:57.666068 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\": container with ID starting with a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9 not found: ID does not exist" containerID="a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9" Feb 19 05:28:57 crc 
kubenswrapper[5012]: I0219 05:28:57.666153 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9"} err="failed to get container status \"a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\": rpc error: code = NotFound desc = could not find container \"a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9\": container with ID starting with a81c9d2625e595bd100942601f7a03c6cac1964eafee47cca60d19b27d3ab7b9 not found: ID does not exist" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.666215 5012 scope.go:117] "RemoveContainer" containerID="d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83" Feb 19 05:28:57 crc kubenswrapper[5012]: E0219 05:28:57.666633 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\": container with ID starting with d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83 not found: ID does not exist" containerID="d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.666685 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83"} err="failed to get container status \"d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\": rpc error: code = NotFound desc = could not find container \"d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83\": container with ID starting with d74982e900b2f8dea9a42099dd6d0eff6a46391cdd130c3a192d74bbcc537f83 not found: ID does not exist" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.666718 5012 scope.go:117] "RemoveContainer" containerID="526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2" Feb 19 
05:28:57 crc kubenswrapper[5012]: E0219 05:28:57.667076 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\": container with ID starting with 526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2 not found: ID does not exist" containerID="526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.667161 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2"} err="failed to get container status \"526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\": rpc error: code = NotFound desc = could not find container \"526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2\": container with ID starting with 526277cb2ca416eb5c094e92dcf51bc72cc9734013de4faec43cdf815c5761a2 not found: ID does not exist" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.667230 5012 scope.go:117] "RemoveContainer" containerID="c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135" Feb 19 05:28:57 crc kubenswrapper[5012]: E0219 05:28:57.667942 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\": container with ID starting with c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135 not found: ID does not exist" containerID="c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.667981 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135"} err="failed to get container status 
\"c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\": rpc error: code = NotFound desc = could not find container \"c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135\": container with ID starting with c037963f9b51be42121031c0730eb90e44676a6fcac10846a39fcd30dc4ce135 not found: ID does not exist" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.668002 5012 scope.go:117] "RemoveContainer" containerID="fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c" Feb 19 05:28:57 crc kubenswrapper[5012]: E0219 05:28:57.668505 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\": container with ID starting with fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c not found: ID does not exist" containerID="fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c" Feb 19 05:28:57 crc kubenswrapper[5012]: I0219 05:28:57.668552 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c"} err="failed to get container status \"fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\": rpc error: code = NotFound desc = could not find container \"fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c\": container with ID starting with fad6302cdfcdb9542852ef471562f9a082b513e24930c04ec9b40ffb0b963c7c not found: ID does not exist" Feb 19 05:28:58 crc kubenswrapper[5012]: E0219 05:28:58.385431 5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="3.2s" Feb 19 05:28:58 crc kubenswrapper[5012]: I0219 05:28:58.710914 5012 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 19 05:29:01 crc kubenswrapper[5012]: E0219 05:29:01.587380 5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="6.4s" Feb 19 05:29:04 crc kubenswrapper[5012]: I0219 05:29:04.706910 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:04 crc kubenswrapper[5012]: I0219 05:29:04.707633 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:05 crc kubenswrapper[5012]: E0219 05:29:05.783396 5012 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.110:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" volumeName="registry-storage" Feb 19 05:29:07 crc kubenswrapper[5012]: E0219 05:29:07.261500 5012 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.110:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18958eb4a555e533 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 05:28:54.959285555 +0000 UTC m=+230.992608174,LastTimestamp:2026-02-19 05:28:54.959285555 +0000 UTC m=+230.992608174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 05:29:07 crc kubenswrapper[5012]: E0219 05:29:07.988988 5012 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.110:6443: connect: connection refused" interval="7s" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.604168 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.604281 5012 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583" exitCode=1 Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.604384 5012 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583"} Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.605421 5012 scope.go:117] "RemoveContainer" containerID="e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.606003 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.607194 5012 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.607770 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.704060 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.711580 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.712064 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.712557 5012 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.728501 5012 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.728541 5012 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:09 crc kubenswrapper[5012]: E0219 05:29:09.729170 5012 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection 
refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:09 crc kubenswrapper[5012]: I0219 05:29:09.729728 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:09 crc kubenswrapper[5012]: W0219 05:29:09.770846 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-145c1437154aa1369dea3e1b974d5651a0ec0495c0efacd59b89fa5850af9323 WatchSource:0}: Error finding container 145c1437154aa1369dea3e1b974d5651a0ec0495c0efacd59b89fa5850af9323: Status 404 returned error can't find the container with id 145c1437154aa1369dea3e1b974d5651a0ec0495c0efacd59b89fa5850af9323 Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.611254 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.614402 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.614553 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7d416dde1b0d46276be91907a124098c4e88b5ed6b05a4907bd5048f78aeba0e"} Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.615676 5012 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" 
Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.616204 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.616692 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.617192 5012 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="deb3b55ddd1daaca601a5db6b545862df01aec3ccbc7dc9516c84175845d0612" exitCode=0 Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.617254 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"deb3b55ddd1daaca601a5db6b545862df01aec3ccbc7dc9516c84175845d0612"} Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.620576 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"145c1437154aa1369dea3e1b974d5651a0ec0495c0efacd59b89fa5850af9323"} Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.621121 5012 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.621156 5012 mirror_client.go:130] "Deleting a mirror 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.621697 5012 status_manager.go:851] "Failed to get status for pod" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:10 crc kubenswrapper[5012]: E0219 05:29:10.621724 5012 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.622202 5012 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:10 crc kubenswrapper[5012]: I0219 05:29:10.622776 5012 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.110:6443: connect: connection refused" Feb 19 05:29:11 crc kubenswrapper[5012]: I0219 05:29:11.631801 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e711166a6f96dfd9b4c99d1b232e6415693e0391069db315fadb15f670110255"} Feb 19 
05:29:11 crc kubenswrapper[5012]: I0219 05:29:11.631872 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"79199297292ba76371bcef8e3dbc8db37207a04b93b48a0a34f0d3003b1bf1b5"} Feb 19 05:29:11 crc kubenswrapper[5012]: I0219 05:29:11.631893 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cf6f0c3167844de76fec185a1b785bbde18fa1b90f768354082420d36a34d37c"} Feb 19 05:29:12 crc kubenswrapper[5012]: I0219 05:29:12.643958 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bcc26507116cf9054581a75d1d0606ee26f13eb9f3e765fb3afbeb4a6cea69dc"} Feb 19 05:29:12 crc kubenswrapper[5012]: I0219 05:29:12.644344 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d0a3474e0b7a76c454a495c38625c8965e1c308c8e645eb8311e3810660a1541"} Feb 19 05:29:12 crc kubenswrapper[5012]: I0219 05:29:12.644647 5012 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:12 crc kubenswrapper[5012]: I0219 05:29:12.644665 5012 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:12 crc kubenswrapper[5012]: I0219 05:29:12.644895 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:14 crc kubenswrapper[5012]: I0219 05:29:14.730424 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:14 crc kubenswrapper[5012]: I0219 05:29:14.730920 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:14 crc kubenswrapper[5012]: I0219 05:29:14.741760 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:17 crc kubenswrapper[5012]: I0219 05:29:17.266296 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:29:17 crc kubenswrapper[5012]: I0219 05:29:17.660438 5012 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:17 crc kubenswrapper[5012]: I0219 05:29:17.759706 5012 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="df47b349-1342-47a4-a6c3-ed3082d2e576" Feb 19 05:29:18 crc kubenswrapper[5012]: I0219 05:29:18.686843 5012 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:18 crc kubenswrapper[5012]: I0219 05:29:18.686879 5012 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:18 crc kubenswrapper[5012]: I0219 05:29:18.690585 5012 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="df47b349-1342-47a4-a6c3-ed3082d2e576" Feb 19 05:29:18 crc kubenswrapper[5012]: I0219 05:29:18.692898 5012 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" 
containerID="cri-o://cf6f0c3167844de76fec185a1b785bbde18fa1b90f768354082420d36a34d37c" Feb 19 05:29:18 crc kubenswrapper[5012]: I0219 05:29:18.692930 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 05:29:19 crc kubenswrapper[5012]: I0219 05:29:19.694166 5012 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:19 crc kubenswrapper[5012]: I0219 05:29:19.694229 5012 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c76e7a1-f9bd-47a7-ae70-a37c6f4149e1" Feb 19 05:29:19 crc kubenswrapper[5012]: I0219 05:29:19.698509 5012 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="df47b349-1342-47a4-a6c3-ed3082d2e576" Feb 19 05:29:20 crc kubenswrapper[5012]: I0219 05:29:20.611739 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:29:20 crc kubenswrapper[5012]: I0219 05:29:20.611993 5012 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 05:29:20 crc kubenswrapper[5012]: I0219 05:29:20.612050 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 05:29:27 crc 
kubenswrapper[5012]: I0219 05:29:27.198029 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 05:29:27 crc kubenswrapper[5012]: I0219 05:29:27.333518 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 05:29:27 crc kubenswrapper[5012]: I0219 05:29:27.930879 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 05:29:28 crc kubenswrapper[5012]: I0219 05:29:28.441124 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 05:29:28 crc kubenswrapper[5012]: I0219 05:29:28.483356 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 05:29:28 crc kubenswrapper[5012]: I0219 05:29:28.495383 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 05:29:28 crc kubenswrapper[5012]: I0219 05:29:28.497346 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 05:29:28 crc kubenswrapper[5012]: I0219 05:29:28.558934 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 05:29:28 crc kubenswrapper[5012]: I0219 05:29:28.588054 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 05:29:28 crc kubenswrapper[5012]: I0219 05:29:28.629480 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 05:29:28 crc kubenswrapper[5012]: I0219 05:29:28.693428 5012 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 05:29:29 crc kubenswrapper[5012]: I0219 05:29:29.165823 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 05:29:29 crc kubenswrapper[5012]: I0219 05:29:29.260413 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 05:29:29 crc kubenswrapper[5012]: I0219 05:29:29.332055 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 05:29:29 crc kubenswrapper[5012]: I0219 05:29:29.620460 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 05:29:29 crc kubenswrapper[5012]: I0219 05:29:29.677662 5012 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 05:29:29 crc kubenswrapper[5012]: I0219 05:29:29.969170 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.020614 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.102554 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.268860 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.381806 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 
05:29:30.558739 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.611899 5012 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.611983 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.613893 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.750925 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.798560 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.911930 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 05:29:30 crc kubenswrapper[5012]: I0219 05:29:30.988160 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.276912 5012 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.290182 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.299468 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.331795 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.337459 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.423598 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.475426 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.485891 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.601831 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.623666 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.692358 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.800371 5012 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.814022 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.927854 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 05:29:31 crc kubenswrapper[5012]: I0219 05:29:31.953428 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.017214 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.208957 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.260676 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.366154 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.399505 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.448805 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.479466 5012 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.484821 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.564919 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.653929 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.665074 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.757243 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.961986 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 05:29:32 crc kubenswrapper[5012]: I0219 05:29:32.981515 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.138001 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.223132 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.279398 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" 
Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.301969 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.324573 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.329648 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.341537 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.383190 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.394737 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.470857 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.612957 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.634969 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.639430 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.661118 5012 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.898409 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.904459 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 05:29:33 crc kubenswrapper[5012]: I0219 05:29:33.946234 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.012271 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.123423 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.176633 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.304249 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.314109 5012 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.332493 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.419802 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 05:29:34 crc 
kubenswrapper[5012]: I0219 05:29:34.446891 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.467916 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.510431 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.511532 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.526338 5012 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.540929 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.596040 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.770799 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.816895 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.820504 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.823521 5012 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.850295 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 05:29:34 crc kubenswrapper[5012]: I0219 05:29:34.868008 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.013626 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.015406 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.066396 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.074600 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.082946 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.218510 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.224797 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.339977 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.516786 5012 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.523920 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.587848 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.652269 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.721538 5012 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.731556 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.732495 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.812980 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.828605 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.854986 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.860542 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.870891 5012 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.881527 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 05:29:35 crc kubenswrapper[5012]: I0219 05:29:35.974007 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.249229 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.262863 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.464746 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.523361 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.553776 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.605840 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.609603 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.663196 5012 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.709473 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.738341 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.751201 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.774447 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.796197 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.885980 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.908388 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.926692 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.952671 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.969990 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.981395 5012 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.981986 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 05:29:36 crc kubenswrapper[5012]: I0219 05:29:36.995029 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.007466 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.067519 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.071839 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.096379 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.211744 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.269046 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.328231 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.470268 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 05:29:37 crc 
kubenswrapper[5012]: I0219 05:29:37.564180 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.722787 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.801528 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.806512 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.818455 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.932574 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.934502 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.941524 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 05:29:37 crc kubenswrapper[5012]: I0219 05:29:37.947398 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.022760 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.123643 5012 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.177835 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.201837 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.269183 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.288117 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.347915 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.419179 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.487667 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.497355 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.497847 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.569551 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 05:29:38 crc 
kubenswrapper[5012]: I0219 05:29:38.574787 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.673949 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.706874 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.804397 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.812867 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.833540 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.880561 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.880814 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.913731 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.935151 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.954508 5012 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 05:29:38 crc kubenswrapper[5012]: I0219 05:29:38.959113 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.014122 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.054098 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.184697 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.258441 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.305469 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.388873 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.409247 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.442027 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.470201 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.495806 5012 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.570513 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.651631 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.679481 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.726837 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.789214 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.792983 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.802865 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 05:29:39 crc kubenswrapper[5012]: I0219 05:29:39.979559 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.202945 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.339999 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 
05:29:40.342192 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.431414 5012 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.613349 5012 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.613458 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.613621 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.614697 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"7d416dde1b0d46276be91907a124098c4e88b5ed6b05a4907bd5048f78aeba0e"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.614926 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="kube-controller-manager" containerID="cri-o://7d416dde1b0d46276be91907a124098c4e88b5ed6b05a4907bd5048f78aeba0e" gracePeriod=30 Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.618704 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.756059 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.816859 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.841386 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.924903 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 05:29:40 crc kubenswrapper[5012]: I0219 05:29:40.963480 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.024185 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.055145 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.081906 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.164387 5012 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.272028 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.275856 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.308027 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.326062 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.397247 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.420871 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.469867 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.500729 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.528878 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.675318 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 
05:29:41.722285 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.772566 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.781080 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.806093 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 05:29:41 crc kubenswrapper[5012]: I0219 05:29:41.813903 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.050766 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.091021 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.107559 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.181456 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.194288 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.302128 5012 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"serving-cert" Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.342741 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.392032 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.557418 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.574185 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.619277 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 05:29:42 crc kubenswrapper[5012]: I0219 05:29:42.686826 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.135581 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.187402 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.709911 5012 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.715419 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=49.71539245 podStartE2EDuration="49.71539245s" podCreationTimestamp="2026-02-19 05:28:54 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:29:17.698443305 +0000 UTC m=+253.731765904" watchObservedRunningTime="2026-02-19 05:29:43.71539245 +0000 UTC m=+279.748715059"
Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.717092 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.717147 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.724538 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.770610 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=26.770553583 podStartE2EDuration="26.770553583s" podCreationTimestamp="2026-02-19 05:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:29:43.744679852 +0000 UTC m=+279.778002421" watchObservedRunningTime="2026-02-19 05:29:43.770553583 +0000 UTC m=+279.803876192"
Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.802832 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 19 05:29:43 crc kubenswrapper[5012]: I0219 05:29:43.807834 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 19 05:29:50 crc kubenswrapper[5012]: I0219 05:29:50.538121 5012 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 19 05:29:50 crc kubenswrapper[5012]: I0219 05:29:50.539763 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://b6f9cb760466cabd1e0a03b9e7b38403b65eda373e574649db72eb2355616bd8" gracePeriod=5
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.428184 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4xvs8"]
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.430734 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4xvs8" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerName="registry-server" containerID="cri-o://bcadb8bab70733341b7bb0cee1dc27ad28111033c1f70563d157cf39fc870bc1" gracePeriod=30
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.436999 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xrjxk"]
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.440949 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xrjxk" podUID="7b9a1165-24e0-4062-b805-0f8262822507" containerName="registry-server" containerID="cri-o://70dff26f289767b3751863d9c38507087e8b580a75adbd7af49ca49b727a95a9" gracePeriod=30
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.458528 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwd8z"]
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.458974 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" podUID="562c18aa-5aed-4f1e-95f5-da1fe7c02523" containerName="marketplace-operator" containerID="cri-o://48aada40317b892d9a223a57a3ac3503ec0ff8bc3ff5df783ac9de195fd3495f" gracePeriod=30
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.470765 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-29nf4"]
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.471462 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-29nf4" podUID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerName="registry-server" containerID="cri-o://ee07414de7a83d1212fd24fac006255c845d66e5f8765acbd5026e0f77d5182b" gracePeriod=30
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.496357 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rprhz"]
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.496718 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rprhz" podUID="e45c788c-c8a0-4563-8d05-71915e390342" containerName="registry-server" containerID="cri-o://4d05b281db5317fbaf4180dd6656c44165f2aee89a9fa2e17cd24d4380132350" gracePeriod=30
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.955973 5012 generic.go:334] "Generic (PLEG): container finished" podID="562c18aa-5aed-4f1e-95f5-da1fe7c02523" containerID="48aada40317b892d9a223a57a3ac3503ec0ff8bc3ff5df783ac9de195fd3495f" exitCode=0
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.956067 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" event={"ID":"562c18aa-5aed-4f1e-95f5-da1fe7c02523","Type":"ContainerDied","Data":"48aada40317b892d9a223a57a3ac3503ec0ff8bc3ff5df783ac9de195fd3495f"}
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.956479 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z" event={"ID":"562c18aa-5aed-4f1e-95f5-da1fe7c02523","Type":"ContainerDied","Data":"a4304d16005995731fefdc081d0677adb43c535c36d93bdb10216b67e4aa8631"}
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.956499 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4304d16005995731fefdc081d0677adb43c535c36d93bdb10216b67e4aa8631"
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.960889 5012 generic.go:334] "Generic (PLEG): container finished" podID="e45c788c-c8a0-4563-8d05-71915e390342" containerID="4d05b281db5317fbaf4180dd6656c44165f2aee89a9fa2e17cd24d4380132350" exitCode=0
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.961011 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprhz" event={"ID":"e45c788c-c8a0-4563-8d05-71915e390342","Type":"ContainerDied","Data":"4d05b281db5317fbaf4180dd6656c44165f2aee89a9fa2e17cd24d4380132350"}
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.963979 5012 generic.go:334] "Generic (PLEG): container finished" podID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerID="ee07414de7a83d1212fd24fac006255c845d66e5f8765acbd5026e0f77d5182b" exitCode=0
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.964064 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29nf4" event={"ID":"185ea561-a45e-49e1-a46b-f9bf9f6d2527","Type":"ContainerDied","Data":"ee07414de7a83d1212fd24fac006255c845d66e5f8765acbd5026e0f77d5182b"}
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.964143 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-29nf4" event={"ID":"185ea561-a45e-49e1-a46b-f9bf9f6d2527","Type":"ContainerDied","Data":"d6248eb1f07ab21d429bccf4d50cb020bfc4631adebda71b1fd6e99e737ec5c4"}
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.964165 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6248eb1f07ab21d429bccf4d50cb020bfc4631adebda71b1fd6e99e737ec5c4"
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.966903 5012 generic.go:334] "Generic (PLEG): container finished" podID="7b9a1165-24e0-4062-b805-0f8262822507" containerID="70dff26f289767b3751863d9c38507087e8b580a75adbd7af49ca49b727a95a9" exitCode=0
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.967010 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrjxk" event={"ID":"7b9a1165-24e0-4062-b805-0f8262822507","Type":"ContainerDied","Data":"70dff26f289767b3751863d9c38507087e8b580a75adbd7af49ca49b727a95a9"}
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.967089 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrjxk" event={"ID":"7b9a1165-24e0-4062-b805-0f8262822507","Type":"ContainerDied","Data":"1dbae8515e388d77b201dd3b6779da7c54d4915cbd633620f81733f1a3b7142f"}
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.967125 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dbae8515e388d77b201dd3b6779da7c54d4915cbd633620f81733f1a3b7142f"
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.969794 5012 generic.go:334] "Generic (PLEG): container finished" podID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerID="bcadb8bab70733341b7bb0cee1dc27ad28111033c1f70563d157cf39fc870bc1" exitCode=0
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.969906 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xvs8" event={"ID":"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9","Type":"ContainerDied","Data":"bcadb8bab70733341b7bb0cee1dc27ad28111033c1f70563d157cf39fc870bc1"}
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.969972 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xvs8" event={"ID":"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9","Type":"ContainerDied","Data":"b8c85544c6a863422777f31be4cc9ef9cf579d3d709dec29ffff9c467cf857f1"}
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.970005 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8c85544c6a863422777f31be4cc9ef9cf579d3d709dec29ffff9c467cf857f1"
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.972340 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 19 05:29:55 crc kubenswrapper[5012]: I0219 05:29:55.972386 5012 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="b6f9cb760466cabd1e0a03b9e7b38403b65eda373e574649db72eb2355616bd8" exitCode=137
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.036732 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrjxk"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.053702 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-29nf4"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.065132 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.077417 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4xvs8"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084209 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v76p\" (UniqueName: \"kubernetes.io/projected/562c18aa-5aed-4f1e-95f5-da1fe7c02523-kube-api-access-4v76p\") pod \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084276 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-utilities\") pod \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084317 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-utilities\") pod \"7b9a1165-24e0-4062-b805-0f8262822507\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084350 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-utilities\") pod \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084414 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-catalog-content\") pod \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084443 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-trusted-ca\") pod \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084541 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plzvr\" (UniqueName: \"kubernetes.io/projected/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-kube-api-access-plzvr\") pod \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\" (UID: \"a7ce4c2b-d3b7-4881-91fe-49f7103f12b9\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084584 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz86t\" (UniqueName: \"kubernetes.io/projected/185ea561-a45e-49e1-a46b-f9bf9f6d2527-kube-api-access-fz86t\") pod \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084642 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-operator-metrics\") pod \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\" (UID: \"562c18aa-5aed-4f1e-95f5-da1fe7c02523\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084666 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-catalog-content\") pod \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\" (UID: \"185ea561-a45e-49e1-a46b-f9bf9f6d2527\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084706 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-catalog-content\") pod \"7b9a1165-24e0-4062-b805-0f8262822507\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.084725 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtwg8\" (UniqueName: \"kubernetes.io/projected/7b9a1165-24e0-4062-b805-0f8262822507-kube-api-access-gtwg8\") pod \"7b9a1165-24e0-4062-b805-0f8262822507\" (UID: \"7b9a1165-24e0-4062-b805-0f8262822507\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.086631 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "562c18aa-5aed-4f1e-95f5-da1fe7c02523" (UID: "562c18aa-5aed-4f1e-95f5-da1fe7c02523"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.088090 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-utilities" (OuterVolumeSpecName: "utilities") pod "a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" (UID: "a7ce4c2b-d3b7-4881-91fe-49f7103f12b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.088183 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-utilities" (OuterVolumeSpecName: "utilities") pod "185ea561-a45e-49e1-a46b-f9bf9f6d2527" (UID: "185ea561-a45e-49e1-a46b-f9bf9f6d2527"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.088504 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rprhz"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.090200 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-utilities" (OuterVolumeSpecName: "utilities") pod "7b9a1165-24e0-4062-b805-0f8262822507" (UID: "7b9a1165-24e0-4062-b805-0f8262822507"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.092616 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b9a1165-24e0-4062-b805-0f8262822507-kube-api-access-gtwg8" (OuterVolumeSpecName: "kube-api-access-gtwg8") pod "7b9a1165-24e0-4062-b805-0f8262822507" (UID: "7b9a1165-24e0-4062-b805-0f8262822507"). InnerVolumeSpecName "kube-api-access-gtwg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.093992 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/185ea561-a45e-49e1-a46b-f9bf9f6d2527-kube-api-access-fz86t" (OuterVolumeSpecName: "kube-api-access-fz86t") pod "185ea561-a45e-49e1-a46b-f9bf9f6d2527" (UID: "185ea561-a45e-49e1-a46b-f9bf9f6d2527"). InnerVolumeSpecName "kube-api-access-fz86t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.094641 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "562c18aa-5aed-4f1e-95f5-da1fe7c02523" (UID: "562c18aa-5aed-4f1e-95f5-da1fe7c02523"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.096663 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-kube-api-access-plzvr" (OuterVolumeSpecName: "kube-api-access-plzvr") pod "a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" (UID: "a7ce4c2b-d3b7-4881-91fe-49f7103f12b9"). InnerVolumeSpecName "kube-api-access-plzvr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.098064 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.098162 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.101049 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/562c18aa-5aed-4f1e-95f5-da1fe7c02523-kube-api-access-4v76p" (OuterVolumeSpecName: "kube-api-access-4v76p") pod "562c18aa-5aed-4f1e-95f5-da1fe7c02523" (UID: "562c18aa-5aed-4f1e-95f5-da1fe7c02523"). InnerVolumeSpecName "kube-api-access-4v76p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.143981 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "185ea561-a45e-49e1-a46b-f9bf9f6d2527" (UID: "185ea561-a45e-49e1-a46b-f9bf9f6d2527"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.170360 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" (UID: "a7ce4c2b-d3b7-4881-91fe-49f7103f12b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.173290 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b9a1165-24e0-4062-b805-0f8262822507" (UID: "7b9a1165-24e0-4062-b805-0f8262822507"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.185928 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-utilities\") pod \"e45c788c-c8a0-4563-8d05-71915e390342\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186002 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186031 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186054 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186080 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186125 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186183 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186209 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr49f\" (UniqueName: \"kubernetes.io/projected/e45c788c-c8a0-4563-8d05-71915e390342-kube-api-access-pr49f\") pod \"e45c788c-c8a0-4563-8d05-71915e390342\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186269 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186291 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186464 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-catalog-content\") pod \"e45c788c-c8a0-4563-8d05-71915e390342\" (UID: \"e45c788c-c8a0-4563-8d05-71915e390342\") "
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.186715 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-utilities" (OuterVolumeSpecName: "utilities") pod "e45c788c-c8a0-4563-8d05-71915e390342" (UID: "e45c788c-c8a0-4563-8d05-71915e390342"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187236 5012 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187275 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plzvr\" (UniqueName: \"kubernetes.io/projected/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-kube-api-access-plzvr\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187296 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz86t\" (UniqueName: \"kubernetes.io/projected/185ea561-a45e-49e1-a46b-f9bf9f6d2527-kube-api-access-fz86t\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187340 5012 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/562c18aa-5aed-4f1e-95f5-da1fe7c02523-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187360 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187378 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187395 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtwg8\" (UniqueName: \"kubernetes.io/projected/7b9a1165-24e0-4062-b805-0f8262822507-kube-api-access-gtwg8\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187415 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187436 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v76p\" (UniqueName: \"kubernetes.io/projected/562c18aa-5aed-4f1e-95f5-da1fe7c02523-kube-api-access-4v76p\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187457 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9a1165-24e0-4062-b805-0f8262822507-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187476 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/185ea561-a45e-49e1-a46b-f9bf9f6d2527-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187495 5012 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187518 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187535 5012 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187552 5012 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187568 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.187620 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.190620 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e45c788c-c8a0-4563-8d05-71915e390342-kube-api-access-pr49f" (OuterVolumeSpecName: "kube-api-access-pr49f") pod "e45c788c-c8a0-4563-8d05-71915e390342" (UID: "e45c788c-c8a0-4563-8d05-71915e390342"). InnerVolumeSpecName "kube-api-access-pr49f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.194263 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.289139 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr49f\" (UniqueName: \"kubernetes.io/projected/e45c788c-c8a0-4563-8d05-71915e390342-kube-api-access-pr49f\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.289184 5012 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.289202 5012 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.306472 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e45c788c-c8a0-4563-8d05-71915e390342" (UID: "e45c788c-c8a0-4563-8d05-71915e390342"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.390869 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e45c788c-c8a0-4563-8d05-71915e390342-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.716639 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.717565 5012 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.735455 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.735498 5012 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4c519278-8830-47e6-a224-0edc04b31b98"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.735532 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.735546 5012 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="4c519278-8830-47e6-a224-0edc04b31b98"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.982418 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.982604 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.983351 5012 scope.go:117] "RemoveContainer" containerID="b6f9cb760466cabd1e0a03b9e7b38403b65eda373e574649db72eb2355616bd8"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.988599 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4xvs8"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.989121 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-29nf4"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.989446 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrjxk"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.989959 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rprhz"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.990156 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kwd8z"
Feb 19 05:29:56 crc kubenswrapper[5012]: I0219 05:29:56.990562 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rprhz" event={"ID":"e45c788c-c8a0-4563-8d05-71915e390342","Type":"ContainerDied","Data":"330c4277adf991cb8d45015f1cf3ae0cb9906f5605d279d3a2745e3670726677"}
Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.011148 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.015297 5012 scope.go:117] "RemoveContainer" containerID="4d05b281db5317fbaf4180dd6656c44165f2aee89a9fa2e17cd24d4380132350"
Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.050233 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rprhz"]
Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.051061 5012 scope.go:117] "RemoveContainer" containerID="842ea38ab87f30dad259cf1979c7ff921a55d4d9e323ba2c8e89f149a1596602"
Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.061948 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rprhz"]
Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.074486 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwd8z"]
Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.081395 5012 scope.go:117] "RemoveContainer" containerID="4b13a012dcea4fefc2b4e7757fddd764d86d7e0aa4fa7cfad77d502f2efa1ea0"
Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.085510 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwd8z"]
Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.094902 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-29nf4"]
Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.098818 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-29nf4"]
Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.101847 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4xvs8"]
Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.106142 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4xvs8"]
Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.110287 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xrjxk"]
Feb 19 05:29:57 crc kubenswrapper[5012]: I0219 05:29:57.111951 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xrjxk"]
Feb 19 05:29:58 crc kubenswrapper[5012]: I0219 05:29:58.713179 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" path="/var/lib/kubelet/pods/185ea561-a45e-49e1-a46b-f9bf9f6d2527/volumes"
Feb 19 05:29:58 crc kubenswrapper[5012]: I0219 05:29:58.714432 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="562c18aa-5aed-4f1e-95f5-da1fe7c02523" path="/var/lib/kubelet/pods/562c18aa-5aed-4f1e-95f5-da1fe7c02523/volumes"
Feb 19 05:29:58 crc kubenswrapper[5012]: I0219 05:29:58.714877 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b9a1165-24e0-4062-b805-0f8262822507" path="/var/lib/kubelet/pods/7b9a1165-24e0-4062-b805-0f8262822507/volumes"
Feb 19 05:29:58 crc kubenswrapper[5012]: I0219 05:29:58.715915 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" path="/var/lib/kubelet/pods/a7ce4c2b-d3b7-4881-91fe-49f7103f12b9/volumes"
Feb 19 05:29:58 crc kubenswrapper[5012]: I0219 05:29:58.716491 5012 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e45c788c-c8a0-4563-8d05-71915e390342" path="/var/lib/kubelet/pods/e45c788c-c8a0-4563-8d05-71915e390342/volumes" Feb 19 05:29:59 crc kubenswrapper[5012]: I0219 05:29:59.074512 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 05:30:00 crc kubenswrapper[5012]: I0219 05:30:00.144091 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 05:30:01 crc kubenswrapper[5012]: I0219 05:30:01.855208 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 05:30:03 crc kubenswrapper[5012]: I0219 05:30:03.064969 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 05:30:04 crc kubenswrapper[5012]: I0219 05:30:04.482377 5012 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 19 05:30:04 crc kubenswrapper[5012]: I0219 05:30:04.565439 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 05:30:05 crc kubenswrapper[5012]: I0219 05:30:05.998027 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 05:30:11 crc kubenswrapper[5012]: I0219 05:30:11.097428 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 19 05:30:11 crc kubenswrapper[5012]: I0219 05:30:11.100549 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 05:30:11 crc kubenswrapper[5012]: I0219 05:30:11.100603 5012 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7d416dde1b0d46276be91907a124098c4e88b5ed6b05a4907bd5048f78aeba0e" exitCode=137 Feb 19 05:30:11 crc kubenswrapper[5012]: I0219 05:30:11.100639 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7d416dde1b0d46276be91907a124098c4e88b5ed6b05a4907bd5048f78aeba0e"} Feb 19 05:30:11 crc kubenswrapper[5012]: I0219 05:30:11.100679 5012 scope.go:117] "RemoveContainer" containerID="e0315b0df825fc9b6a89224452e0951a77f8619237e11de16603fb644fe2c583" Feb 19 05:30:12 crc kubenswrapper[5012]: I0219 05:30:12.109931 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 19 05:30:12 crc kubenswrapper[5012]: I0219 05:30:12.112663 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"69d50f3c484f71457aead6e9fa94aac8b61c7cd6a7bc711668b6fc0f9ca49157"} Feb 19 05:30:13 crc kubenswrapper[5012]: I0219 05:30:13.637430 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 05:30:14 crc kubenswrapper[5012]: I0219 05:30:14.295174 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 05:30:17 crc kubenswrapper[5012]: I0219 05:30:17.266185 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:30:18 crc kubenswrapper[5012]: I0219 05:30:18.626861 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 05:30:20 crc kubenswrapper[5012]: I0219 05:30:20.611759 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:30:20 crc kubenswrapper[5012]: I0219 05:30:20.617491 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:30:21 crc kubenswrapper[5012]: I0219 05:30:21.170973 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 05:30:21 crc kubenswrapper[5012]: I0219 05:30:21.304853 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.399833 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jqjls"] Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400420 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerName="extract-utilities" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400432 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerName="extract-utilities" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400441 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400449 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400456 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45c788c-c8a0-4563-8d05-71915e390342" containerName="extract-utilities" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400462 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45c788c-c8a0-4563-8d05-71915e390342" containerName="extract-utilities" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400471 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400477 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400486 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerName="extract-content" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400492 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerName="extract-content" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400500 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45c788c-c8a0-4563-8d05-71915e390342" containerName="extract-content" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400505 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45c788c-c8a0-4563-8d05-71915e390342" containerName="extract-content" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400513 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="562c18aa-5aed-4f1e-95f5-da1fe7c02523" containerName="marketplace-operator" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400519 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="562c18aa-5aed-4f1e-95f5-da1fe7c02523" containerName="marketplace-operator" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400528 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9a1165-24e0-4062-b805-0f8262822507" containerName="extract-utilities" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400533 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9a1165-24e0-4062-b805-0f8262822507" containerName="extract-utilities" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400539 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9a1165-24e0-4062-b805-0f8262822507" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400545 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9a1165-24e0-4062-b805-0f8262822507" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400551 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400557 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400563 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" containerName="installer" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400569 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" containerName="installer" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400579 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9a1165-24e0-4062-b805-0f8262822507" containerName="extract-content" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400585 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7b9a1165-24e0-4062-b805-0f8262822507" containerName="extract-content" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400592 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerName="extract-content" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400598 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerName="extract-content" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400604 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45c788c-c8a0-4563-8d05-71915e390342" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400609 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45c788c-c8a0-4563-8d05-71915e390342" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: E0219 05:30:33.400616 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerName="extract-utilities" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400621 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerName="extract-utilities" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400698 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="185ea561-a45e-49e1-a46b-f9bf9f6d2527" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400707 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b9a1165-24e0-4062-b805-0f8262822507" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400716 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="e45c788c-c8a0-4563-8d05-71915e390342" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400723 5012 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="a60ebe63-e6e8-4716-b6a7-09471bd1761c" containerName="installer" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400730 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="562c18aa-5aed-4f1e-95f5-da1fe7c02523" containerName="marketplace-operator" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400737 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.400746 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ce4c2b-d3b7-4881-91fe-49f7103f12b9" containerName="registry-server" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.401043 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.405846 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.406162 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.406282 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.406403 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.411494 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.474809 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jqjls"] 
Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.528122 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r"] Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.529990 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.532572 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.532578 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.537844 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/800f8349-6ef3-44ae-90a0-56c89ca82479-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jqjls\" (UID: \"800f8349-6ef3-44ae-90a0-56c89ca82479\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.537930 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/800f8349-6ef3-44ae-90a0-56c89ca82479-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jqjls\" (UID: \"800f8349-6ef3-44ae-90a0-56c89ca82479\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.537979 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swxjt\" (UniqueName: 
\"kubernetes.io/projected/800f8349-6ef3-44ae-90a0-56c89ca82479-kube-api-access-swxjt\") pod \"marketplace-operator-79b997595-jqjls\" (UID: \"800f8349-6ef3-44ae-90a0-56c89ca82479\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.544268 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r"] Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.639291 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swxjt\" (UniqueName: \"kubernetes.io/projected/800f8349-6ef3-44ae-90a0-56c89ca82479-kube-api-access-swxjt\") pod \"marketplace-operator-79b997595-jqjls\" (UID: \"800f8349-6ef3-44ae-90a0-56c89ca82479\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.639395 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62m9n\" (UniqueName: \"kubernetes.io/projected/ff63f713-7649-46d8-85cb-ef67dccf9fe6-kube-api-access-62m9n\") pod \"collect-profiles-29524650-khs5r\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.639445 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff63f713-7649-46d8-85cb-ef67dccf9fe6-config-volume\") pod \"collect-profiles-29524650-khs5r\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.639474 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/800f8349-6ef3-44ae-90a0-56c89ca82479-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jqjls\" (UID: \"800f8349-6ef3-44ae-90a0-56c89ca82479\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.639522 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff63f713-7649-46d8-85cb-ef67dccf9fe6-secret-volume\") pod \"collect-profiles-29524650-khs5r\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.639545 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/800f8349-6ef3-44ae-90a0-56c89ca82479-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jqjls\" (UID: \"800f8349-6ef3-44ae-90a0-56c89ca82479\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.640783 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/800f8349-6ef3-44ae-90a0-56c89ca82479-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jqjls\" (UID: \"800f8349-6ef3-44ae-90a0-56c89ca82479\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.645515 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/800f8349-6ef3-44ae-90a0-56c89ca82479-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jqjls\" (UID: \"800f8349-6ef3-44ae-90a0-56c89ca82479\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 
05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.657904 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swxjt\" (UniqueName: \"kubernetes.io/projected/800f8349-6ef3-44ae-90a0-56c89ca82479-kube-api-access-swxjt\") pod \"marketplace-operator-79b997595-jqjls\" (UID: \"800f8349-6ef3-44ae-90a0-56c89ca82479\") " pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.740478 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff63f713-7649-46d8-85cb-ef67dccf9fe6-secret-volume\") pod \"collect-profiles-29524650-khs5r\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.741192 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62m9n\" (UniqueName: \"kubernetes.io/projected/ff63f713-7649-46d8-85cb-ef67dccf9fe6-kube-api-access-62m9n\") pod \"collect-profiles-29524650-khs5r\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.741234 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff63f713-7649-46d8-85cb-ef67dccf9fe6-config-volume\") pod \"collect-profiles-29524650-khs5r\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.742430 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff63f713-7649-46d8-85cb-ef67dccf9fe6-config-volume\") pod \"collect-profiles-29524650-khs5r\" (UID: 
\"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.744066 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff63f713-7649-46d8-85cb-ef67dccf9fe6-secret-volume\") pod \"collect-profiles-29524650-khs5r\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.750012 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.765394 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62m9n\" (UniqueName: \"kubernetes.io/projected/ff63f713-7649-46d8-85cb-ef67dccf9fe6-kube-api-access-62m9n\") pod \"collect-profiles-29524650-khs5r\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.851916 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:33 crc kubenswrapper[5012]: I0219 05:30:33.942632 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jqjls"] Feb 19 05:30:33 crc kubenswrapper[5012]: W0219 05:30:33.947207 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod800f8349_6ef3_44ae_90a0_56c89ca82479.slice/crio-0da434541d678745e494583af4b770ef7b1441fe49184eb098ec0a275ca69bde WatchSource:0}: Error finding container 0da434541d678745e494583af4b770ef7b1441fe49184eb098ec0a275ca69bde: Status 404 returned error can't find the container with id 0da434541d678745e494583af4b770ef7b1441fe49184eb098ec0a275ca69bde Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.046797 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r"] Feb 19 05:30:34 crc kubenswrapper[5012]: W0219 05:30:34.051057 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff63f713_7649_46d8_85cb_ef67dccf9fe6.slice/crio-b0d0f7e8c8c311cb6ac2e3bbcd209640b91cc0e24038d7d2c06b81dcc3952cd4 WatchSource:0}: Error finding container b0d0f7e8c8c311cb6ac2e3bbcd209640b91cc0e24038d7d2c06b81dcc3952cd4: Status 404 returned error can't find the container with id b0d0f7e8c8c311cb6ac2e3bbcd209640b91cc0e24038d7d2c06b81dcc3952cd4 Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.249948 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" event={"ID":"ff63f713-7649-46d8-85cb-ef67dccf9fe6","Type":"ContainerStarted","Data":"c5d7329af46ea59d345e496a5c84f8c51fab010adcb4a91e0080f58a2ca4a9ec"} Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.250284 5012 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" event={"ID":"ff63f713-7649-46d8-85cb-ef67dccf9fe6","Type":"ContainerStarted","Data":"b0d0f7e8c8c311cb6ac2e3bbcd209640b91cc0e24038d7d2c06b81dcc3952cd4"} Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.251815 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" event={"ID":"800f8349-6ef3-44ae-90a0-56c89ca82479","Type":"ContainerStarted","Data":"bac282f789775200a115771164055f59f8edf0cda080bafae25efbbc31423525"} Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.251842 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" event={"ID":"800f8349-6ef3-44ae-90a0-56c89ca82479","Type":"ContainerStarted","Data":"0da434541d678745e494583af4b770ef7b1441fe49184eb098ec0a275ca69bde"} Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.252418 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.253681 5012 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jqjls container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.58:8080/healthz\": dial tcp 10.217.0.58:8080: connect: connection refused" start-of-body= Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.253731 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" podUID="800f8349-6ef3-44ae-90a0-56c89ca82479" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.58:8080/healthz\": dial tcp 10.217.0.58:8080: connect: connection refused" Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.276773 5012 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" podStartSLOduration=1.2767539829999999 podStartE2EDuration="1.276753983s" podCreationTimestamp="2026-02-19 05:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:30:34.276489416 +0000 UTC m=+330.309811985" watchObservedRunningTime="2026-02-19 05:30:34.276753983 +0000 UTC m=+330.310076552" Feb 19 05:30:34 crc kubenswrapper[5012]: I0219 05:30:34.276867 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" podStartSLOduration=1.276863796 podStartE2EDuration="1.276863796s" podCreationTimestamp="2026-02-19 05:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:30:34.262138172 +0000 UTC m=+330.295460751" watchObservedRunningTime="2026-02-19 05:30:34.276863796 +0000 UTC m=+330.310186365" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.007928 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ntrlp"] Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.008175 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" podUID="7e9dd710-d0ec-443f-a081-b18c4b6abe36" containerName="controller-manager" containerID="cri-o://d41d8bd2ca6cc54e0495b26c42ee87c5303f40e928d5ca5c25add9b16457d3a2" gracePeriod=30 Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.101610 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2"] Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.101838 5012 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" podUID="89f1d0f3-c220-4668-b822-3b20b64ebfb8" containerName="route-controller-manager" containerID="cri-o://0ee8e83714534126962abe0549581114f5bc02b2fbc1bd415c2917a0b2e51cc4" gracePeriod=30 Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.179638 5012 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-ntrlp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.179706 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" podUID="7e9dd710-d0ec-443f-a081-b18c4b6abe36" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.256466 5012 generic.go:334] "Generic (PLEG): container finished" podID="ff63f713-7649-46d8-85cb-ef67dccf9fe6" containerID="c5d7329af46ea59d345e496a5c84f8c51fab010adcb4a91e0080f58a2ca4a9ec" exitCode=0 Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.256509 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" event={"ID":"ff63f713-7649-46d8-85cb-ef67dccf9fe6","Type":"ContainerDied","Data":"c5d7329af46ea59d345e496a5c84f8c51fab010adcb4a91e0080f58a2ca4a9ec"} Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.257868 5012 generic.go:334] "Generic (PLEG): container finished" podID="7e9dd710-d0ec-443f-a081-b18c4b6abe36" containerID="d41d8bd2ca6cc54e0495b26c42ee87c5303f40e928d5ca5c25add9b16457d3a2" exitCode=0 Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.257931 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" event={"ID":"7e9dd710-d0ec-443f-a081-b18c4b6abe36","Type":"ContainerDied","Data":"d41d8bd2ca6cc54e0495b26c42ee87c5303f40e928d5ca5c25add9b16457d3a2"} Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.259723 5012 generic.go:334] "Generic (PLEG): container finished" podID="89f1d0f3-c220-4668-b822-3b20b64ebfb8" containerID="0ee8e83714534126962abe0549581114f5bc02b2fbc1bd415c2917a0b2e51cc4" exitCode=0 Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.259799 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" event={"ID":"89f1d0f3-c220-4668-b822-3b20b64ebfb8","Type":"ContainerDied","Data":"0ee8e83714534126962abe0549581114f5bc02b2fbc1bd415c2917a0b2e51cc4"} Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.288041 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jqjls" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.288160 5012 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-mn4f2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.288194 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" podUID="89f1d0f3-c220-4668-b822-3b20b64ebfb8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.462976 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.499142 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.561889 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-proxy-ca-bundles\") pod \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.561941 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e9dd710-d0ec-443f-a081-b18c4b6abe36-serving-cert\") pod \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.561986 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5vpz\" (UniqueName: \"kubernetes.io/projected/7e9dd710-d0ec-443f-a081-b18c4b6abe36-kube-api-access-q5vpz\") pod \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.562013 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-config\") pod \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.562092 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-client-ca\") pod 
\"7e9dd710-d0ec-443f-a081-b18c4b6abe36\" (UID: \"7e9dd710-d0ec-443f-a081-b18c4b6abe36\") " Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.562921 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-client-ca" (OuterVolumeSpecName: "client-ca") pod "7e9dd710-d0ec-443f-a081-b18c4b6abe36" (UID: "7e9dd710-d0ec-443f-a081-b18c4b6abe36"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.563170 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7e9dd710-d0ec-443f-a081-b18c4b6abe36" (UID: "7e9dd710-d0ec-443f-a081-b18c4b6abe36"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.564411 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-config" (OuterVolumeSpecName: "config") pod "7e9dd710-d0ec-443f-a081-b18c4b6abe36" (UID: "7e9dd710-d0ec-443f-a081-b18c4b6abe36"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.568517 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e9dd710-d0ec-443f-a081-b18c4b6abe36-kube-api-access-q5vpz" (OuterVolumeSpecName: "kube-api-access-q5vpz") pod "7e9dd710-d0ec-443f-a081-b18c4b6abe36" (UID: "7e9dd710-d0ec-443f-a081-b18c4b6abe36"). InnerVolumeSpecName "kube-api-access-q5vpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.569472 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e9dd710-d0ec-443f-a081-b18c4b6abe36-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7e9dd710-d0ec-443f-a081-b18c4b6abe36" (UID: "7e9dd710-d0ec-443f-a081-b18c4b6abe36"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.663236 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89f1d0f3-c220-4668-b822-3b20b64ebfb8-serving-cert\") pod \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.663353 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgg97\" (UniqueName: \"kubernetes.io/projected/89f1d0f3-c220-4668-b822-3b20b64ebfb8-kube-api-access-fgg97\") pod \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.663397 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-client-ca\") pod \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.663424 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-config\") pod \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\" (UID: \"89f1d0f3-c220-4668-b822-3b20b64ebfb8\") " Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.663622 5012 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-q5vpz\" (UniqueName: \"kubernetes.io/projected/7e9dd710-d0ec-443f-a081-b18c4b6abe36-kube-api-access-q5vpz\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.663634 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.663642 5012 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.663650 5012 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e9dd710-d0ec-443f-a081-b18c4b6abe36-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.663658 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e9dd710-d0ec-443f-a081-b18c4b6abe36-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.664712 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-client-ca" (OuterVolumeSpecName: "client-ca") pod "89f1d0f3-c220-4668-b822-3b20b64ebfb8" (UID: "89f1d0f3-c220-4668-b822-3b20b64ebfb8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.664834 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-config" (OuterVolumeSpecName: "config") pod "89f1d0f3-c220-4668-b822-3b20b64ebfb8" (UID: "89f1d0f3-c220-4668-b822-3b20b64ebfb8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.667417 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f1d0f3-c220-4668-b822-3b20b64ebfb8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "89f1d0f3-c220-4668-b822-3b20b64ebfb8" (UID: "89f1d0f3-c220-4668-b822-3b20b64ebfb8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.668176 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f1d0f3-c220-4668-b822-3b20b64ebfb8-kube-api-access-fgg97" (OuterVolumeSpecName: "kube-api-access-fgg97") pod "89f1d0f3-c220-4668-b822-3b20b64ebfb8" (UID: "89f1d0f3-c220-4668-b822-3b20b64ebfb8"). InnerVolumeSpecName "kube-api-access-fgg97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.764833 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgg97\" (UniqueName: \"kubernetes.io/projected/89f1d0f3-c220-4668-b822-3b20b64ebfb8-kube-api-access-fgg97\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.764875 5012 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.764888 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f1d0f3-c220-4668-b822-3b20b64ebfb8-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:35 crc kubenswrapper[5012]: I0219 05:30:35.764898 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89f1d0f3-c220-4668-b822-3b20b64ebfb8-serving-cert\") on node \"crc\" DevicePath 
\"\"" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.268479 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" event={"ID":"89f1d0f3-c220-4668-b822-3b20b64ebfb8","Type":"ContainerDied","Data":"5003562696efaf86d8b690a85cdcf58c161a34b94a16cc2ce64a20964ec94127"} Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.268560 5012 scope.go:117] "RemoveContainer" containerID="0ee8e83714534126962abe0549581114f5bc02b2fbc1bd415c2917a0b2e51cc4" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.268505 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.276245 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.278390 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ntrlp" event={"ID":"7e9dd710-d0ec-443f-a081-b18c4b6abe36","Type":"ContainerDied","Data":"1ee3dd9b34ee54e0754750a439b4590af9a0a688e92512f756cbea34daf382ca"} Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.298606 5012 scope.go:117] "RemoveContainer" containerID="d41d8bd2ca6cc54e0495b26c42ee87c5303f40e928d5ca5c25add9b16457d3a2" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.307543 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2"] Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.325514 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mn4f2"] Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.336581 5012 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ntrlp"] Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.349075 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ntrlp"] Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.423273 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-678789d75d-n5n5h"] Feb 19 05:30:36 crc kubenswrapper[5012]: E0219 05:30:36.423702 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e9dd710-d0ec-443f-a081-b18c4b6abe36" containerName="controller-manager" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.423775 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e9dd710-d0ec-443f-a081-b18c4b6abe36" containerName="controller-manager" Feb 19 05:30:36 crc kubenswrapper[5012]: E0219 05:30:36.423794 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f1d0f3-c220-4668-b822-3b20b64ebfb8" containerName="route-controller-manager" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.423801 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f1d0f3-c220-4668-b822-3b20b64ebfb8" containerName="route-controller-manager" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.424017 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e9dd710-d0ec-443f-a081-b18c4b6abe36" containerName="controller-manager" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.424034 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f1d0f3-c220-4668-b822-3b20b64ebfb8" containerName="route-controller-manager" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.424731 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.429890 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.430070 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw"] Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.430358 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.430542 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.430668 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.430816 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.431332 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.434448 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.436695 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.437287 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.437779 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.437870 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.438343 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.438360 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.442879 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.450658 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw"] Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.463296 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-678789d75d-n5n5h"] Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.613411 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.623139 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd25j\" (UniqueName: \"kubernetes.io/projected/018f3b6e-7828-44c1-923e-f438710195ca-kube-api-access-gd25j\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.623207 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-config\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.623236 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-client-ca\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.623269 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-proxy-ca-bundles\") pod \"controller-manager-678789d75d-n5n5h\" (UID: 
\"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.623297 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-config\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.623349 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k845n\" (UniqueName: \"kubernetes.io/projected/5e26810d-df6b-4534-bdab-c3d121e79479-kube-api-access-k845n\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.623464 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e26810d-df6b-4534-bdab-c3d121e79479-serving-cert\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.623515 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-client-ca\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.623765 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/018f3b6e-7828-44c1-923e-f438710195ca-serving-cert\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.710068 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e9dd710-d0ec-443f-a081-b18c4b6abe36" path="/var/lib/kubelet/pods/7e9dd710-d0ec-443f-a081-b18c4b6abe36/volumes" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.710579 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89f1d0f3-c220-4668-b822-3b20b64ebfb8" path="/var/lib/kubelet/pods/89f1d0f3-c220-4668-b822-3b20b64ebfb8/volumes" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.715494 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-678789d75d-n5n5h"] Feb 19 05:30:36 crc kubenswrapper[5012]: E0219 05:30:36.715822 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-k845n proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" podUID="5e26810d-df6b-4534-bdab-c3d121e79479" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.724619 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff63f713-7649-46d8-85cb-ef67dccf9fe6-secret-volume\") pod \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.724695 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ff63f713-7649-46d8-85cb-ef67dccf9fe6-config-volume\") pod \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.724759 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62m9n\" (UniqueName: \"kubernetes.io/projected/ff63f713-7649-46d8-85cb-ef67dccf9fe6-kube-api-access-62m9n\") pod \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\" (UID: \"ff63f713-7649-46d8-85cb-ef67dccf9fe6\") " Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.724879 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k845n\" (UniqueName: \"kubernetes.io/projected/5e26810d-df6b-4534-bdab-c3d121e79479-kube-api-access-k845n\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.724905 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e26810d-df6b-4534-bdab-c3d121e79479-serving-cert\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.724926 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-client-ca\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.724954 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/018f3b6e-7828-44c1-923e-f438710195ca-serving-cert\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.724997 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd25j\" (UniqueName: \"kubernetes.io/projected/018f3b6e-7828-44c1-923e-f438710195ca-kube-api-access-gd25j\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.725020 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-config\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.725042 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-client-ca\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.725061 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-proxy-ca-bundles\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc 
kubenswrapper[5012]: I0219 05:30:36.725079 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-config\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.725795 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff63f713-7649-46d8-85cb-ef67dccf9fe6-config-volume" (OuterVolumeSpecName: "config-volume") pod "ff63f713-7649-46d8-85cb-ef67dccf9fe6" (UID: "ff63f713-7649-46d8-85cb-ef67dccf9fe6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.726103 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-config\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.726653 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-client-ca\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.727228 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-config\") pod \"controller-manager-678789d75d-n5n5h\" (UID: 
\"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.727331 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-proxy-ca-bundles\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.728847 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-client-ca\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.735592 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff63f713-7649-46d8-85cb-ef67dccf9fe6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ff63f713-7649-46d8-85cb-ef67dccf9fe6" (UID: "ff63f713-7649-46d8-85cb-ef67dccf9fe6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.735620 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff63f713-7649-46d8-85cb-ef67dccf9fe6-kube-api-access-62m9n" (OuterVolumeSpecName: "kube-api-access-62m9n") pod "ff63f713-7649-46d8-85cb-ef67dccf9fe6" (UID: "ff63f713-7649-46d8-85cb-ef67dccf9fe6"). InnerVolumeSpecName "kube-api-access-62m9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.736233 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e26810d-df6b-4534-bdab-c3d121e79479-serving-cert\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.739890 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw"] Feb 19 05:30:36 crc kubenswrapper[5012]: E0219 05:30:36.740221 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-gd25j serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" podUID="018f3b6e-7828-44c1-923e-f438710195ca" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.747884 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k845n\" (UniqueName: \"kubernetes.io/projected/5e26810d-df6b-4534-bdab-c3d121e79479-kube-api-access-k845n\") pod \"controller-manager-678789d75d-n5n5h\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.751895 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd25j\" (UniqueName: \"kubernetes.io/projected/018f3b6e-7828-44c1-923e-f438710195ca-kube-api-access-gd25j\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.756603 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/018f3b6e-7828-44c1-923e-f438710195ca-serving-cert\") pod \"route-controller-manager-79bd468846-l6dcw\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.826240 5012 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff63f713-7649-46d8-85cb-ef67dccf9fe6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.827002 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62m9n\" (UniqueName: \"kubernetes.io/projected/ff63f713-7649-46d8-85cb-ef67dccf9fe6-kube-api-access-62m9n\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:36 crc kubenswrapper[5012]: I0219 05:30:36.827033 5012 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ff63f713-7649-46d8-85cb-ef67dccf9fe6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.296963 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.297597 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.299899 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.300012 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r" event={"ID":"ff63f713-7649-46d8-85cb-ef67dccf9fe6","Type":"ContainerDied","Data":"b0d0f7e8c8c311cb6ac2e3bbcd209640b91cc0e24038d7d2c06b81dcc3952cd4"} Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.300061 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0d0f7e8c8c311cb6ac2e3bbcd209640b91cc0e24038d7d2c06b81dcc3952cd4" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.313540 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.325254 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.336475 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd25j\" (UniqueName: \"kubernetes.io/projected/018f3b6e-7828-44c1-923e-f438710195ca-kube-api-access-gd25j\") pod \"018f3b6e-7828-44c1-923e-f438710195ca\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.337295 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-client-ca" (OuterVolumeSpecName: "client-ca") pod "5e26810d-df6b-4534-bdab-c3d121e79479" (UID: "5e26810d-df6b-4534-bdab-c3d121e79479"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.338780 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-client-ca\") pod \"5e26810d-df6b-4534-bdab-c3d121e79479\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.338918 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-client-ca\") pod \"018f3b6e-7828-44c1-923e-f438710195ca\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.338965 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-config\") pod \"018f3b6e-7828-44c1-923e-f438710195ca\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.339490 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-client-ca" (OuterVolumeSpecName: "client-ca") pod "018f3b6e-7828-44c1-923e-f438710195ca" (UID: "018f3b6e-7828-44c1-923e-f438710195ca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.339697 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-config" (OuterVolumeSpecName: "config") pod "018f3b6e-7828-44c1-923e-f438710195ca" (UID: "018f3b6e-7828-44c1-923e-f438710195ca"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.339795 5012 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.339813 5012 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.339825 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018f3b6e-7828-44c1-923e-f438710195ca-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.344530 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018f3b6e-7828-44c1-923e-f438710195ca-kube-api-access-gd25j" (OuterVolumeSpecName: "kube-api-access-gd25j") pod "018f3b6e-7828-44c1-923e-f438710195ca" (UID: "018f3b6e-7828-44c1-923e-f438710195ca"). InnerVolumeSpecName "kube-api-access-gd25j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.442739 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k845n\" (UniqueName: \"kubernetes.io/projected/5e26810d-df6b-4534-bdab-c3d121e79479-kube-api-access-k845n\") pod \"5e26810d-df6b-4534-bdab-c3d121e79479\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.442843 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/018f3b6e-7828-44c1-923e-f438710195ca-serving-cert\") pod \"018f3b6e-7828-44c1-923e-f438710195ca\" (UID: \"018f3b6e-7828-44c1-923e-f438710195ca\") " Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.442917 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e26810d-df6b-4534-bdab-c3d121e79479-serving-cert\") pod \"5e26810d-df6b-4534-bdab-c3d121e79479\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.442969 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-proxy-ca-bundles\") pod \"5e26810d-df6b-4534-bdab-c3d121e79479\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.443007 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-config\") pod \"5e26810d-df6b-4534-bdab-c3d121e79479\" (UID: \"5e26810d-df6b-4534-bdab-c3d121e79479\") " Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.443336 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd25j\" (UniqueName: 
\"kubernetes.io/projected/018f3b6e-7828-44c1-923e-f438710195ca-kube-api-access-gd25j\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.444436 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5e26810d-df6b-4534-bdab-c3d121e79479" (UID: "5e26810d-df6b-4534-bdab-c3d121e79479"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.445074 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-config" (OuterVolumeSpecName: "config") pod "5e26810d-df6b-4534-bdab-c3d121e79479" (UID: "5e26810d-df6b-4534-bdab-c3d121e79479"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.448425 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018f3b6e-7828-44c1-923e-f438710195ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "018f3b6e-7828-44c1-923e-f438710195ca" (UID: "018f3b6e-7828-44c1-923e-f438710195ca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.449634 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e26810d-df6b-4534-bdab-c3d121e79479-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5e26810d-df6b-4534-bdab-c3d121e79479" (UID: "5e26810d-df6b-4534-bdab-c3d121e79479"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.450617 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e26810d-df6b-4534-bdab-c3d121e79479-kube-api-access-k845n" (OuterVolumeSpecName: "kube-api-access-k845n") pod "5e26810d-df6b-4534-bdab-c3d121e79479" (UID: "5e26810d-df6b-4534-bdab-c3d121e79479"). InnerVolumeSpecName "kube-api-access-k845n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.545419 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k845n\" (UniqueName: \"kubernetes.io/projected/5e26810d-df6b-4534-bdab-c3d121e79479-kube-api-access-k845n\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.545471 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/018f3b6e-7828-44c1-923e-f438710195ca-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.545488 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e26810d-df6b-4534-bdab-c3d121e79479-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.545500 5012 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:37 crc kubenswrapper[5012]: I0219 05:30:37.545516 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e26810d-df6b-4534-bdab-c3d121e79479-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.301782 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.301829 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678789d75d-n5n5h" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.349693 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-678789d75d-n5n5h"] Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.355672 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7775888cf-2hw62"] Feb 19 05:30:38 crc kubenswrapper[5012]: E0219 05:30:38.356251 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff63f713-7649-46d8-85cb-ef67dccf9fe6" containerName="collect-profiles" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.356278 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff63f713-7649-46d8-85cb-ef67dccf9fe6" containerName="collect-profiles" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.356665 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff63f713-7649-46d8-85cb-ef67dccf9fe6" containerName="collect-profiles" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.357619 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.360890 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.363796 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.364198 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.365479 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.365602 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.365710 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.384560 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.388144 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-678789d75d-n5n5h"] Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.395274 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7775888cf-2hw62"] Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.404535 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw"] Feb 19 05:30:38 crc 
kubenswrapper[5012]: I0219 05:30:38.408973 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79bd468846-l6dcw"] Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.456967 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z72ss\" (UniqueName: \"kubernetes.io/projected/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-kube-api-access-z72ss\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.457103 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-proxy-ca-bundles\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.457145 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-client-ca\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.457173 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-serving-cert\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 
05:30:38.457432 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-config\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.558572 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-proxy-ca-bundles\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.558622 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-client-ca\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.558646 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-serving-cert\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.558667 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-config\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " 
pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.558689 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z72ss\" (UniqueName: \"kubernetes.io/projected/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-kube-api-access-z72ss\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.559575 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-proxy-ca-bundles\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.559936 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-config\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.560268 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-client-ca\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.586433 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z72ss\" (UniqueName: \"kubernetes.io/projected/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-kube-api-access-z72ss\") pod 
\"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.600540 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-serving-cert\") pod \"controller-manager-7775888cf-2hw62\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.684081 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.717450 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="018f3b6e-7828-44c1-923e-f438710195ca" path="/var/lib/kubelet/pods/018f3b6e-7828-44c1-923e-f438710195ca/volumes" Feb 19 05:30:38 crc kubenswrapper[5012]: I0219 05:30:38.718166 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e26810d-df6b-4534-bdab-c3d121e79479" path="/var/lib/kubelet/pods/5e26810d-df6b-4534-bdab-c3d121e79479/volumes" Feb 19 05:30:39 crc kubenswrapper[5012]: I0219 05:30:39.182497 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7775888cf-2hw62"] Feb 19 05:30:39 crc kubenswrapper[5012]: I0219 05:30:39.310349 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" event={"ID":"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065","Type":"ContainerStarted","Data":"241f515f82f6fd136ce265688cc81997f56dc2b17fde97525591ae1b17f15e90"} Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.317171 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" 
event={"ID":"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065","Type":"ContainerStarted","Data":"c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028"} Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.317696 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.323294 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.341964 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" podStartSLOduration=4.341948837 podStartE2EDuration="4.341948837s" podCreationTimestamp="2026-02-19 05:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:30:40.341431893 +0000 UTC m=+336.374754482" watchObservedRunningTime="2026-02-19 05:30:40.341948837 +0000 UTC m=+336.375271416" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.417913 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4"] Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.418469 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.420565 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.421679 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.421720 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.421814 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.422101 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.422211 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.437113 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4"] Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.506368 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-config\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.506465 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-client-ca\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.506495 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-serving-cert\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.506528 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qr8w\" (UniqueName: \"kubernetes.io/projected/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-kube-api-access-4qr8w\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.607991 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-client-ca\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.608046 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-serving-cert\") pod 
\"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.608087 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qr8w\" (UniqueName: \"kubernetes.io/projected/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-kube-api-access-4qr8w\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.608135 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-config\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.609203 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-client-ca\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.609325 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-config\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.623846 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-serving-cert\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.629245 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qr8w\" (UniqueName: \"kubernetes.io/projected/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-kube-api-access-4qr8w\") pod \"route-controller-manager-7785b8bc59-xpbq4\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.748549 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:40 crc kubenswrapper[5012]: I0219 05:30:40.977247 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4"] Feb 19 05:30:41 crc kubenswrapper[5012]: I0219 05:30:41.327765 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" event={"ID":"71ef05be-3ff3-4a9f-b039-19c1840d1e2b","Type":"ContainerStarted","Data":"c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956"} Feb 19 05:30:41 crc kubenswrapper[5012]: I0219 05:30:41.328255 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" event={"ID":"71ef05be-3ff3-4a9f-b039-19c1840d1e2b","Type":"ContainerStarted","Data":"dfd61f82ed50896c324968c9e67ea833599f37a8a37e450778101c6d08037e66"} Feb 19 05:30:41 crc kubenswrapper[5012]: I0219 05:30:41.329812 5012 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:41 crc kubenswrapper[5012]: I0219 05:30:41.354009 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" podStartSLOduration=5.353991761 podStartE2EDuration="5.353991761s" podCreationTimestamp="2026-02-19 05:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:30:41.350431533 +0000 UTC m=+337.383754172" watchObservedRunningTime="2026-02-19 05:30:41.353991761 +0000 UTC m=+337.387314340" Feb 19 05:30:41 crc kubenswrapper[5012]: I0219 05:30:41.519465 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:30:44 crc kubenswrapper[5012]: I0219 05:30:44.431245 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:30:44 crc kubenswrapper[5012]: I0219 05:30:44.431694 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.575286 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cxb7f"] Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.577270 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.580182 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.598765 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cxb7f"] Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.633559 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cecb9fea-b109-4267-918f-765d774f76de-utilities\") pod \"redhat-operators-cxb7f\" (UID: \"cecb9fea-b109-4267-918f-765d774f76de\") " pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.633654 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xnhc\" (UniqueName: \"kubernetes.io/projected/cecb9fea-b109-4267-918f-765d774f76de-kube-api-access-9xnhc\") pod \"redhat-operators-cxb7f\" (UID: \"cecb9fea-b109-4267-918f-765d774f76de\") " pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.633731 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cecb9fea-b109-4267-918f-765d774f76de-catalog-content\") pod \"redhat-operators-cxb7f\" (UID: \"cecb9fea-b109-4267-918f-765d774f76de\") " pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.735101 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cecb9fea-b109-4267-918f-765d774f76de-utilities\") pod \"redhat-operators-cxb7f\" (UID: \"cecb9fea-b109-4267-918f-765d774f76de\") " 
pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.735145 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xnhc\" (UniqueName: \"kubernetes.io/projected/cecb9fea-b109-4267-918f-765d774f76de-kube-api-access-9xnhc\") pod \"redhat-operators-cxb7f\" (UID: \"cecb9fea-b109-4267-918f-765d774f76de\") " pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.735173 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cecb9fea-b109-4267-918f-765d774f76de-catalog-content\") pod \"redhat-operators-cxb7f\" (UID: \"cecb9fea-b109-4267-918f-765d774f76de\") " pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.735882 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cecb9fea-b109-4267-918f-765d774f76de-catalog-content\") pod \"redhat-operators-cxb7f\" (UID: \"cecb9fea-b109-4267-918f-765d774f76de\") " pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.736013 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cecb9fea-b109-4267-918f-765d774f76de-utilities\") pod \"redhat-operators-cxb7f\" (UID: \"cecb9fea-b109-4267-918f-765d774f76de\") " pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.760558 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m458l"] Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.762535 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.767076 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.776095 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xnhc\" (UniqueName: \"kubernetes.io/projected/cecb9fea-b109-4267-918f-765d774f76de-kube-api-access-9xnhc\") pod \"redhat-operators-cxb7f\" (UID: \"cecb9fea-b109-4267-918f-765d774f76de\") " pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.779417 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m458l"] Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.836391 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c19ca5-841c-4d69-b2ca-a7649d14492f-utilities\") pod \"redhat-marketplace-m458l\" (UID: \"81c19ca5-841c-4d69-b2ca-a7649d14492f\") " pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.836446 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8qlb\" (UniqueName: \"kubernetes.io/projected/81c19ca5-841c-4d69-b2ca-a7649d14492f-kube-api-access-q8qlb\") pod \"redhat-marketplace-m458l\" (UID: \"81c19ca5-841c-4d69-b2ca-a7649d14492f\") " pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.836475 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c19ca5-841c-4d69-b2ca-a7649d14492f-catalog-content\") pod \"redhat-marketplace-m458l\" (UID: 
\"81c19ca5-841c-4d69-b2ca-a7649d14492f\") " pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.896944 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.938180 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c19ca5-841c-4d69-b2ca-a7649d14492f-utilities\") pod \"redhat-marketplace-m458l\" (UID: \"81c19ca5-841c-4d69-b2ca-a7649d14492f\") " pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.938258 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8qlb\" (UniqueName: \"kubernetes.io/projected/81c19ca5-841c-4d69-b2ca-a7649d14492f-kube-api-access-q8qlb\") pod \"redhat-marketplace-m458l\" (UID: \"81c19ca5-841c-4d69-b2ca-a7649d14492f\") " pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.938293 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c19ca5-841c-4d69-b2ca-a7649d14492f-catalog-content\") pod \"redhat-marketplace-m458l\" (UID: \"81c19ca5-841c-4d69-b2ca-a7649d14492f\") " pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.939097 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81c19ca5-841c-4d69-b2ca-a7649d14492f-utilities\") pod \"redhat-marketplace-m458l\" (UID: \"81c19ca5-841c-4d69-b2ca-a7649d14492f\") " pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.939261 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81c19ca5-841c-4d69-b2ca-a7649d14492f-catalog-content\") pod \"redhat-marketplace-m458l\" (UID: \"81c19ca5-841c-4d69-b2ca-a7649d14492f\") " pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:03 crc kubenswrapper[5012]: I0219 05:31:03.971005 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8qlb\" (UniqueName: \"kubernetes.io/projected/81c19ca5-841c-4d69-b2ca-a7649d14492f-kube-api-access-q8qlb\") pod \"redhat-marketplace-m458l\" (UID: \"81c19ca5-841c-4d69-b2ca-a7649d14492f\") " pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:04 crc kubenswrapper[5012]: I0219 05:31:04.113340 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:04 crc kubenswrapper[5012]: I0219 05:31:04.391090 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cxb7f"] Feb 19 05:31:04 crc kubenswrapper[5012]: W0219 05:31:04.401249 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcecb9fea_b109_4267_918f_765d774f76de.slice/crio-0c2d27083f289e84547829ffafd41fc7390a2dfe1bbf4e0eef99f84fbb54839d WatchSource:0}: Error finding container 0c2d27083f289e84547829ffafd41fc7390a2dfe1bbf4e0eef99f84fbb54839d: Status 404 returned error can't find the container with id 0c2d27083f289e84547829ffafd41fc7390a2dfe1bbf4e0eef99f84fbb54839d Feb 19 05:31:04 crc kubenswrapper[5012]: I0219 05:31:04.477411 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxb7f" event={"ID":"cecb9fea-b109-4267-918f-765d774f76de","Type":"ContainerStarted","Data":"0c2d27083f289e84547829ffafd41fc7390a2dfe1bbf4e0eef99f84fbb54839d"} Feb 19 05:31:04 crc kubenswrapper[5012]: I0219 05:31:04.624511 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-m458l"] Feb 19 05:31:04 crc kubenswrapper[5012]: W0219 05:31:04.642105 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81c19ca5_841c_4d69_b2ca_a7649d14492f.slice/crio-a4a2ee58a1ee0443a3bb2126af73b4500eab5e427c6c1f2da19a17691450ecab WatchSource:0}: Error finding container a4a2ee58a1ee0443a3bb2126af73b4500eab5e427c6c1f2da19a17691450ecab: Status 404 returned error can't find the container with id a4a2ee58a1ee0443a3bb2126af73b4500eab5e427c6c1f2da19a17691450ecab Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.366383 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zflwk"] Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.370038 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.373749 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.384234 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zflwk"] Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.465337 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b555779a-946d-4ad9-93a6-2b0673f81cfa-utilities\") pod \"certified-operators-zflwk\" (UID: \"b555779a-946d-4ad9-93a6-2b0673f81cfa\") " pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.465397 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9948j\" (UniqueName: 
\"kubernetes.io/projected/b555779a-946d-4ad9-93a6-2b0673f81cfa-kube-api-access-9948j\") pod \"certified-operators-zflwk\" (UID: \"b555779a-946d-4ad9-93a6-2b0673f81cfa\") " pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.465496 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b555779a-946d-4ad9-93a6-2b0673f81cfa-catalog-content\") pod \"certified-operators-zflwk\" (UID: \"b555779a-946d-4ad9-93a6-2b0673f81cfa\") " pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.484487 5012 generic.go:334] "Generic (PLEG): container finished" podID="81c19ca5-841c-4d69-b2ca-a7649d14492f" containerID="3bb96dc6ea9ca0e073c0895f30eb25f91844a36f786ef70b6971d9793f352f16" exitCode=0 Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.484562 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m458l" event={"ID":"81c19ca5-841c-4d69-b2ca-a7649d14492f","Type":"ContainerDied","Data":"3bb96dc6ea9ca0e073c0895f30eb25f91844a36f786ef70b6971d9793f352f16"} Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.484594 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m458l" event={"ID":"81c19ca5-841c-4d69-b2ca-a7649d14492f","Type":"ContainerStarted","Data":"a4a2ee58a1ee0443a3bb2126af73b4500eab5e427c6c1f2da19a17691450ecab"} Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.486737 5012 generic.go:334] "Generic (PLEG): container finished" podID="cecb9fea-b109-4267-918f-765d774f76de" containerID="2ec212b49789d746369dd0d46fb64e5f8a52d1f36073c17ad109017f565d5cc0" exitCode=0 Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.486765 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxb7f" 
event={"ID":"cecb9fea-b109-4267-918f-765d774f76de","Type":"ContainerDied","Data":"2ec212b49789d746369dd0d46fb64e5f8a52d1f36073c17ad109017f565d5cc0"} Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.567468 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b555779a-946d-4ad9-93a6-2b0673f81cfa-catalog-content\") pod \"certified-operators-zflwk\" (UID: \"b555779a-946d-4ad9-93a6-2b0673f81cfa\") " pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.567724 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b555779a-946d-4ad9-93a6-2b0673f81cfa-utilities\") pod \"certified-operators-zflwk\" (UID: \"b555779a-946d-4ad9-93a6-2b0673f81cfa\") " pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.567762 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9948j\" (UniqueName: \"kubernetes.io/projected/b555779a-946d-4ad9-93a6-2b0673f81cfa-kube-api-access-9948j\") pod \"certified-operators-zflwk\" (UID: \"b555779a-946d-4ad9-93a6-2b0673f81cfa\") " pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.568295 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b555779a-946d-4ad9-93a6-2b0673f81cfa-utilities\") pod \"certified-operators-zflwk\" (UID: \"b555779a-946d-4ad9-93a6-2b0673f81cfa\") " pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.568720 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b555779a-946d-4ad9-93a6-2b0673f81cfa-catalog-content\") pod 
\"certified-operators-zflwk\" (UID: \"b555779a-946d-4ad9-93a6-2b0673f81cfa\") " pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.592291 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9948j\" (UniqueName: \"kubernetes.io/projected/b555779a-946d-4ad9-93a6-2b0673f81cfa-kube-api-access-9948j\") pod \"certified-operators-zflwk\" (UID: \"b555779a-946d-4ad9-93a6-2b0673f81cfa\") " pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:05 crc kubenswrapper[5012]: I0219 05:31:05.735671 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.213619 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zflwk"] Feb 19 05:31:06 crc kubenswrapper[5012]: W0219 05:31:06.222222 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb555779a_946d_4ad9_93a6_2b0673f81cfa.slice/crio-9b4e5b05bdd827b02856a52a173b715b5cccdfa3c2d5af46010710f7d63bba3a WatchSource:0}: Error finding container 9b4e5b05bdd827b02856a52a173b715b5cccdfa3c2d5af46010710f7d63bba3a: Status 404 returned error can't find the container with id 9b4e5b05bdd827b02856a52a173b715b5cccdfa3c2d5af46010710f7d63bba3a Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.357211 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bj5sc"] Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.358434 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.360875 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.372714 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bj5sc"] Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.480378 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-catalog-content\") pod \"community-operators-bj5sc\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.480887 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqbpk\" (UniqueName: \"kubernetes.io/projected/b03ab861-19bb-4215-9b19-990a14b35367-kube-api-access-lqbpk\") pod \"community-operators-bj5sc\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.480914 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-utilities\") pod \"community-operators-bj5sc\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.505319 5012 generic.go:334] "Generic (PLEG): container finished" podID="b555779a-946d-4ad9-93a6-2b0673f81cfa" containerID="1f9856a85600d035cb8ae20af1e60c7ec749b8285e0eaf03555f8a2fcad90706" exitCode=0 Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 
05:31:06.505416 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zflwk" event={"ID":"b555779a-946d-4ad9-93a6-2b0673f81cfa","Type":"ContainerDied","Data":"1f9856a85600d035cb8ae20af1e60c7ec749b8285e0eaf03555f8a2fcad90706"} Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.505453 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zflwk" event={"ID":"b555779a-946d-4ad9-93a6-2b0673f81cfa","Type":"ContainerStarted","Data":"9b4e5b05bdd827b02856a52a173b715b5cccdfa3c2d5af46010710f7d63bba3a"} Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.511092 5012 generic.go:334] "Generic (PLEG): container finished" podID="81c19ca5-841c-4d69-b2ca-a7649d14492f" containerID="6082ddf6d79e147f1c61622ab7264da1b8d5c390b814f06355a4ff2d1ac6b44a" exitCode=0 Feb 19 05:31:06 crc kubenswrapper[5012]: I0219 05:31:06.511156 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m458l" event={"ID":"81c19ca5-841c-4d69-b2ca-a7649d14492f","Type":"ContainerDied","Data":"6082ddf6d79e147f1c61622ab7264da1b8d5c390b814f06355a4ff2d1ac6b44a"} Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:06.583071 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-catalog-content\") pod \"community-operators-bj5sc\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:06.583156 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqbpk\" (UniqueName: \"kubernetes.io/projected/b03ab861-19bb-4215-9b19-990a14b35367-kube-api-access-lqbpk\") pod \"community-operators-bj5sc\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:07 crc 
kubenswrapper[5012]: I0219 05:31:06.583180 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-utilities\") pod \"community-operators-bj5sc\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:06.583808 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-utilities\") pod \"community-operators-bj5sc\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:06.584273 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-catalog-content\") pod \"community-operators-bj5sc\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:06.609976 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqbpk\" (UniqueName: \"kubernetes.io/projected/b03ab861-19bb-4215-9b19-990a14b35367-kube-api-access-lqbpk\") pod \"community-operators-bj5sc\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:06.692505 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.000728 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-m5vgr"] Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.001964 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.029713 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-m5vgr"] Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.090195 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc4vq\" (UniqueName: \"kubernetes.io/projected/5e7ced67-3fa8-4660-951b-4189c7d078c1-kube-api-access-jc4vq\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.090227 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5e7ced67-3fa8-4660-951b-4189c7d078c1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.090250 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5e7ced67-3fa8-4660-951b-4189c7d078c1-bound-sa-token\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc 
kubenswrapper[5012]: I0219 05:31:07.090286 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.090332 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e7ced67-3fa8-4660-951b-4189c7d078c1-trusted-ca\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.090711 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5e7ced67-3fa8-4660-951b-4189c7d078c1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.090736 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e7ced67-3fa8-4660-951b-4189c7d078c1-registry-tls\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.090753 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/5e7ced67-3fa8-4660-951b-4189c7d078c1-registry-certificates\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.111166 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.192590 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5e7ced67-3fa8-4660-951b-4189c7d078c1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.192653 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e7ced67-3fa8-4660-951b-4189c7d078c1-registry-tls\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.192716 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5e7ced67-3fa8-4660-951b-4189c7d078c1-registry-certificates\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.192768 
5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc4vq\" (UniqueName: \"kubernetes.io/projected/5e7ced67-3fa8-4660-951b-4189c7d078c1-kube-api-access-jc4vq\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.192792 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5e7ced67-3fa8-4660-951b-4189c7d078c1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.192816 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5e7ced67-3fa8-4660-951b-4189c7d078c1-bound-sa-token\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.192858 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e7ced67-3fa8-4660-951b-4189c7d078c1-trusted-ca\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.193263 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5e7ced67-3fa8-4660-951b-4189c7d078c1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.197269 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5e7ced67-3fa8-4660-951b-4189c7d078c1-registry-certificates\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.198570 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e7ced67-3fa8-4660-951b-4189c7d078c1-trusted-ca\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.210390 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5e7ced67-3fa8-4660-951b-4189c7d078c1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.210579 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5e7ced67-3fa8-4660-951b-4189c7d078c1-registry-tls\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.214038 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5e7ced67-3fa8-4660-951b-4189c7d078c1-bound-sa-token\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: 
\"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.215709 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc4vq\" (UniqueName: \"kubernetes.io/projected/5e7ced67-3fa8-4660-951b-4189c7d078c1-kube-api-access-jc4vq\") pod \"image-registry-66df7c8f76-m5vgr\" (UID: \"5e7ced67-3fa8-4660-951b-4189c7d078c1\") " pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.275124 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bj5sc"] Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.334844 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.531989 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zflwk" event={"ID":"b555779a-946d-4ad9-93a6-2b0673f81cfa","Type":"ContainerStarted","Data":"18c04d3e0a556e08e145d8671024263e121d7285528693acc63a8584858c5547"} Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.548520 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m458l" event={"ID":"81c19ca5-841c-4d69-b2ca-a7649d14492f","Type":"ContainerStarted","Data":"acd1a68a3588e87125f965caeaa54907e323a070f3f5ea824ceb8312fcd4e767"} Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.558981 5012 generic.go:334] "Generic (PLEG): container finished" podID="cecb9fea-b109-4267-918f-765d774f76de" containerID="2ed7d09ee5975ad995e8e62683134569a8f178080301ed6dc1f2a8d6791a5bb2" exitCode=0 Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.559051 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cxb7f" 
event={"ID":"cecb9fea-b109-4267-918f-765d774f76de","Type":"ContainerDied","Data":"2ed7d09ee5975ad995e8e62683134569a8f178080301ed6dc1f2a8d6791a5bb2"} Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.570423 5012 generic.go:334] "Generic (PLEG): container finished" podID="b03ab861-19bb-4215-9b19-990a14b35367" containerID="8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b" exitCode=0 Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.570478 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj5sc" event={"ID":"b03ab861-19bb-4215-9b19-990a14b35367","Type":"ContainerDied","Data":"8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b"} Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.570508 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj5sc" event={"ID":"b03ab861-19bb-4215-9b19-990a14b35367","Type":"ContainerStarted","Data":"b3f8ca73c66c4fd97d0f19be0a24c8b8a95a41c1d3401dfb594c1ddc1a916e29"} Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.576625 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m458l" podStartSLOduration=3.061825416 podStartE2EDuration="4.576609257s" podCreationTimestamp="2026-02-19 05:31:03 +0000 UTC" firstStartedPulling="2026-02-19 05:31:05.486745585 +0000 UTC m=+361.520068154" lastFinishedPulling="2026-02-19 05:31:07.001529416 +0000 UTC m=+363.034851995" observedRunningTime="2026-02-19 05:31:07.575328602 +0000 UTC m=+363.608651171" watchObservedRunningTime="2026-02-19 05:31:07.576609257 +0000 UTC m=+363.609931826" Feb 19 05:31:07 crc kubenswrapper[5012]: I0219 05:31:07.786455 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-m5vgr"] Feb 19 05:31:07 crc kubenswrapper[5012]: W0219 05:31:07.791035 5012 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e7ced67_3fa8_4660_951b_4189c7d078c1.slice/crio-2abf1cfb5dc808ba6052f5f03399c0b57ba8d855ee77fe7089a16877e5137581 WatchSource:0}: Error finding container 2abf1cfb5dc808ba6052f5f03399c0b57ba8d855ee77fe7089a16877e5137581: Status 404 returned error can't find the container with id 2abf1cfb5dc808ba6052f5f03399c0b57ba8d855ee77fe7089a16877e5137581 Feb 19 05:31:08 crc kubenswrapper[5012]: I0219 05:31:08.577888 5012 generic.go:334] "Generic (PLEG): container finished" podID="b555779a-946d-4ad9-93a6-2b0673f81cfa" containerID="18c04d3e0a556e08e145d8671024263e121d7285528693acc63a8584858c5547" exitCode=0 Feb 19 05:31:08 crc kubenswrapper[5012]: I0219 05:31:08.577957 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zflwk" event={"ID":"b555779a-946d-4ad9-93a6-2b0673f81cfa","Type":"ContainerDied","Data":"18c04d3e0a556e08e145d8671024263e121d7285528693acc63a8584858c5547"} Feb 19 05:31:08 crc kubenswrapper[5012]: I0219 05:31:08.579420 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" event={"ID":"5e7ced67-3fa8-4660-951b-4189c7d078c1","Type":"ContainerStarted","Data":"ca838889ea01da4806d579a4d22e99aa9471a255e8c9bcb3e5b14495f10abcc4"} Feb 19 05:31:08 crc kubenswrapper[5012]: I0219 05:31:08.579457 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" event={"ID":"5e7ced67-3fa8-4660-951b-4189c7d078c1","Type":"ContainerStarted","Data":"2abf1cfb5dc808ba6052f5f03399c0b57ba8d855ee77fe7089a16877e5137581"} Feb 19 05:31:08 crc kubenswrapper[5012]: I0219 05:31:08.579664 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:08 crc kubenswrapper[5012]: I0219 05:31:08.582005 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-cxb7f" event={"ID":"cecb9fea-b109-4267-918f-765d774f76de","Type":"ContainerStarted","Data":"da121bb8e64b250b4a3b9532132adb26d96746a0941a1029a305150fe510833e"} Feb 19 05:31:08 crc kubenswrapper[5012]: I0219 05:31:08.584150 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj5sc" event={"ID":"b03ab861-19bb-4215-9b19-990a14b35367","Type":"ContainerStarted","Data":"8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f"} Feb 19 05:31:08 crc kubenswrapper[5012]: I0219 05:31:08.644059 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" podStartSLOduration=2.644038991 podStartE2EDuration="2.644038991s" podCreationTimestamp="2026-02-19 05:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:31:08.641697097 +0000 UTC m=+364.675019656" watchObservedRunningTime="2026-02-19 05:31:08.644038991 +0000 UTC m=+364.677361570" Feb 19 05:31:08 crc kubenswrapper[5012]: I0219 05:31:08.662349 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cxb7f" podStartSLOduration=3.201146057 podStartE2EDuration="5.662332553s" podCreationTimestamp="2026-02-19 05:31:03 +0000 UTC" firstStartedPulling="2026-02-19 05:31:05.488716579 +0000 UTC m=+361.522039158" lastFinishedPulling="2026-02-19 05:31:07.949903075 +0000 UTC m=+363.983225654" observedRunningTime="2026-02-19 05:31:08.658844217 +0000 UTC m=+364.692166796" watchObservedRunningTime="2026-02-19 05:31:08.662332553 +0000 UTC m=+364.695655112" Feb 19 05:31:09 crc kubenswrapper[5012]: I0219 05:31:09.592681 5012 generic.go:334] "Generic (PLEG): container finished" podID="b03ab861-19bb-4215-9b19-990a14b35367" containerID="8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f" exitCode=0 
Feb 19 05:31:09 crc kubenswrapper[5012]: I0219 05:31:09.592804 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj5sc" event={"ID":"b03ab861-19bb-4215-9b19-990a14b35367","Type":"ContainerDied","Data":"8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f"} Feb 19 05:31:09 crc kubenswrapper[5012]: I0219 05:31:09.597501 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zflwk" event={"ID":"b555779a-946d-4ad9-93a6-2b0673f81cfa","Type":"ContainerStarted","Data":"3900d249d58e0777de23ce8421fd67853f9df359240e7b9335c423524b4c196b"} Feb 19 05:31:09 crc kubenswrapper[5012]: I0219 05:31:09.662264 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zflwk" podStartSLOduration=2.230236528 podStartE2EDuration="4.662239984s" podCreationTimestamp="2026-02-19 05:31:05 +0000 UTC" firstStartedPulling="2026-02-19 05:31:06.516432513 +0000 UTC m=+362.549755112" lastFinishedPulling="2026-02-19 05:31:08.948435969 +0000 UTC m=+364.981758568" observedRunningTime="2026-02-19 05:31:09.651990623 +0000 UTC m=+365.685313192" watchObservedRunningTime="2026-02-19 05:31:09.662239984 +0000 UTC m=+365.695562563" Feb 19 05:31:10 crc kubenswrapper[5012]: I0219 05:31:10.606801 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj5sc" event={"ID":"b03ab861-19bb-4215-9b19-990a14b35367","Type":"ContainerStarted","Data":"ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80"} Feb 19 05:31:10 crc kubenswrapper[5012]: I0219 05:31:10.625935 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bj5sc" podStartSLOduration=2.201614118 podStartE2EDuration="4.625905372s" podCreationTimestamp="2026-02-19 05:31:06 +0000 UTC" firstStartedPulling="2026-02-19 05:31:07.574632893 +0000 UTC m=+363.607955462" 
lastFinishedPulling="2026-02-19 05:31:09.998924127 +0000 UTC m=+366.032246716" observedRunningTime="2026-02-19 05:31:10.624663328 +0000 UTC m=+366.657985927" watchObservedRunningTime="2026-02-19 05:31:10.625905372 +0000 UTC m=+366.659227951" Feb 19 05:31:13 crc kubenswrapper[5012]: I0219 05:31:13.898491 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:13 crc kubenswrapper[5012]: I0219 05:31:13.899123 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:14 crc kubenswrapper[5012]: I0219 05:31:14.114040 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:14 crc kubenswrapper[5012]: I0219 05:31:14.114646 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:14 crc kubenswrapper[5012]: I0219 05:31:14.202338 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:14 crc kubenswrapper[5012]: I0219 05:31:14.430507 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:31:14 crc kubenswrapper[5012]: I0219 05:31:14.430602 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:31:14 crc kubenswrapper[5012]: I0219 05:31:14.683794 5012 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m458l" Feb 19 05:31:14 crc kubenswrapper[5012]: I0219 05:31:14.952051 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cxb7f" podUID="cecb9fea-b109-4267-918f-765d774f76de" containerName="registry-server" probeResult="failure" output=< Feb 19 05:31:14 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 05:31:14 crc kubenswrapper[5012]: > Feb 19 05:31:15 crc kubenswrapper[5012]: I0219 05:31:15.763951 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:15 crc kubenswrapper[5012]: I0219 05:31:15.771984 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:15 crc kubenswrapper[5012]: I0219 05:31:15.843893 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:16 crc kubenswrapper[5012]: I0219 05:31:16.693862 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:16 crc kubenswrapper[5012]: I0219 05:31:16.695586 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:16 crc kubenswrapper[5012]: I0219 05:31:16.702512 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zflwk" Feb 19 05:31:16 crc kubenswrapper[5012]: I0219 05:31:16.767026 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:17 crc kubenswrapper[5012]: I0219 05:31:17.714286 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-bj5sc" Feb 19 05:31:23 crc kubenswrapper[5012]: I0219 05:31:23.976017 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:24 crc kubenswrapper[5012]: I0219 05:31:24.052475 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cxb7f" Feb 19 05:31:27 crc kubenswrapper[5012]: I0219 05:31:27.342183 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-m5vgr" Feb 19 05:31:27 crc kubenswrapper[5012]: I0219 05:31:27.432279 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ljzsp"] Feb 19 05:31:34 crc kubenswrapper[5012]: I0219 05:31:34.996861 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7775888cf-2hw62"] Feb 19 05:31:34 crc kubenswrapper[5012]: I0219 05:31:34.997959 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" podUID="aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" containerName="controller-manager" containerID="cri-o://c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028" gracePeriod=30 Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:34.999845 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4"] Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.000131 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" podUID="71ef05be-3ff3-4a9f-b039-19c1840d1e2b" containerName="route-controller-manager" 
containerID="cri-o://c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956" gracePeriod=30 Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.444178 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.456922 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.563604 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-proxy-ca-bundles\") pod \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.563716 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z72ss\" (UniqueName: \"kubernetes.io/projected/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-kube-api-access-z72ss\") pod \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.563801 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-config\") pod \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.563870 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qr8w\" (UniqueName: \"kubernetes.io/projected/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-kube-api-access-4qr8w\") pod \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " Feb 19 05:31:35 crc 
kubenswrapper[5012]: I0219 05:31:35.563952 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-client-ca\") pod \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.564001 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-serving-cert\") pod \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.564037 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-config\") pod \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\" (UID: \"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065\") " Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.564109 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-serving-cert\") pod \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.564177 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-client-ca\") pod \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\" (UID: \"71ef05be-3ff3-4a9f-b039-19c1840d1e2b\") " Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.565128 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-config" (OuterVolumeSpecName: "config") pod "71ef05be-3ff3-4a9f-b039-19c1840d1e2b" (UID: 
"71ef05be-3ff3-4a9f-b039-19c1840d1e2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.565238 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-client-ca" (OuterVolumeSpecName: "client-ca") pod "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" (UID: "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.565371 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-config" (OuterVolumeSpecName: "config") pod "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" (UID: "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.565403 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" (UID: "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.566323 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-client-ca" (OuterVolumeSpecName: "client-ca") pod "71ef05be-3ff3-4a9f-b039-19c1840d1e2b" (UID: "71ef05be-3ff3-4a9f-b039-19c1840d1e2b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.572969 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "71ef05be-3ff3-4a9f-b039-19c1840d1e2b" (UID: "71ef05be-3ff3-4a9f-b039-19c1840d1e2b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.573393 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-kube-api-access-4qr8w" (OuterVolumeSpecName: "kube-api-access-4qr8w") pod "71ef05be-3ff3-4a9f-b039-19c1840d1e2b" (UID: "71ef05be-3ff3-4a9f-b039-19c1840d1e2b"). InnerVolumeSpecName "kube-api-access-4qr8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.573523 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-kube-api-access-z72ss" (OuterVolumeSpecName: "kube-api-access-z72ss") pod "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" (UID: "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065"). InnerVolumeSpecName "kube-api-access-z72ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.574406 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" (UID: "aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.665965 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.665999 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qr8w\" (UniqueName: \"kubernetes.io/projected/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-kube-api-access-4qr8w\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.666011 5012 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.666024 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.666033 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.666041 5012 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.666050 5012 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71ef05be-3ff3-4a9f-b039-19c1840d1e2b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.666059 5012 reconciler_common.go:293] "Volume detached for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.666068 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z72ss\" (UniqueName: \"kubernetes.io/projected/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065-kube-api-access-z72ss\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.814953 5012 generic.go:334] "Generic (PLEG): container finished" podID="aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" containerID="c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028" exitCode=0 Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.815016 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" event={"ID":"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065","Type":"ContainerDied","Data":"c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028"} Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.815046 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" event={"ID":"aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065","Type":"ContainerDied","Data":"241f515f82f6fd136ce265688cc81997f56dc2b17fde97525591ae1b17f15e90"} Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.815054 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7775888cf-2hw62" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.815063 5012 scope.go:117] "RemoveContainer" containerID="c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.819011 5012 generic.go:334] "Generic (PLEG): container finished" podID="71ef05be-3ff3-4a9f-b039-19c1840d1e2b" containerID="c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956" exitCode=0 Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.819085 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" event={"ID":"71ef05be-3ff3-4a9f-b039-19c1840d1e2b","Type":"ContainerDied","Data":"c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956"} Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.819119 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" event={"ID":"71ef05be-3ff3-4a9f-b039-19c1840d1e2b","Type":"ContainerDied","Data":"dfd61f82ed50896c324968c9e67ea833599f37a8a37e450778101c6d08037e66"} Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.819127 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.832977 5012 scope.go:117] "RemoveContainer" containerID="c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028" Feb 19 05:31:35 crc kubenswrapper[5012]: E0219 05:31:35.833642 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028\": container with ID starting with c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028 not found: ID does not exist" containerID="c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.833691 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028"} err="failed to get container status \"c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028\": rpc error: code = NotFound desc = could not find container \"c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028\": container with ID starting with c58b0ce42a8a9999fbf017a8d1d4ac7e32915ba849294e197265064360b14028 not found: ID does not exist" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.833730 5012 scope.go:117] "RemoveContainer" containerID="c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.851480 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7775888cf-2hw62"] Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.854175 5012 scope.go:117] "RemoveContainer" containerID="c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956" Feb 19 05:31:35 crc kubenswrapper[5012]: E0219 05:31:35.855211 5012 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956\": container with ID starting with c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956 not found: ID does not exist" containerID="c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.855246 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956"} err="failed to get container status \"c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956\": rpc error: code = NotFound desc = could not find container \"c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956\": container with ID starting with c611b92b37e1a76e7eeae7bd7571c9a1a04ed68217779fea11e352137f7e5956 not found: ID does not exist" Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.862980 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7775888cf-2hw62"] Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.866787 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4"] Feb 19 05:31:35 crc kubenswrapper[5012]: I0219 05:31:35.870716 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7785b8bc59-xpbq4"] Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.462100 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58679ff78b-j5lgb"] Feb 19 05:31:36 crc kubenswrapper[5012]: E0219 05:31:36.462578 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" containerName="controller-manager" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 
05:31:36.462599 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" containerName="controller-manager" Feb 19 05:31:36 crc kubenswrapper[5012]: E0219 05:31:36.462634 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ef05be-3ff3-4a9f-b039-19c1840d1e2b" containerName="route-controller-manager" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.462643 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ef05be-3ff3-4a9f-b039-19c1840d1e2b" containerName="route-controller-manager" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.462988 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" containerName="controller-manager" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.463029 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ef05be-3ff3-4a9f-b039-19c1840d1e2b" containerName="route-controller-manager" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.464358 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.467622 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.467884 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.469842 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs"] Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.469892 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.470192 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.471260 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.473410 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.480553 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.480924 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.481176 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.481294 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58679ff78b-j5lgb"] Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.481522 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.481843 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.481858 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.481857 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.484988 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 
05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.508219 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs"] Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.581194 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brtl6\" (UniqueName: \"kubernetes.io/projected/ed296d68-b4cc-4931-8424-35586d5d0570-kube-api-access-brtl6\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.581264 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed296d68-b4cc-4931-8424-35586d5d0570-proxy-ca-bundles\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.581297 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/414aed21-07fc-4c2c-90fb-8fc90fd728e8-client-ca\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.581364 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed296d68-b4cc-4931-8424-35586d5d0570-serving-cert\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc 
kubenswrapper[5012]: I0219 05:31:36.581399 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn8g6\" (UniqueName: \"kubernetes.io/projected/414aed21-07fc-4c2c-90fb-8fc90fd728e8-kube-api-access-zn8g6\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.581564 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414aed21-07fc-4c2c-90fb-8fc90fd728e8-config\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.581679 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed296d68-b4cc-4931-8424-35586d5d0570-client-ca\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.581851 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/414aed21-07fc-4c2c-90fb-8fc90fd728e8-serving-cert\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.582000 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ed296d68-b4cc-4931-8424-35586d5d0570-config\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.682991 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed296d68-b4cc-4931-8424-35586d5d0570-serving-cert\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.683029 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn8g6\" (UniqueName: \"kubernetes.io/projected/414aed21-07fc-4c2c-90fb-8fc90fd728e8-kube-api-access-zn8g6\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.683052 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414aed21-07fc-4c2c-90fb-8fc90fd728e8-config\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.683075 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed296d68-b4cc-4931-8424-35586d5d0570-client-ca\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 
05:31:36.683104 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/414aed21-07fc-4c2c-90fb-8fc90fd728e8-serving-cert\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.683128 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed296d68-b4cc-4931-8424-35586d5d0570-config\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.683164 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brtl6\" (UniqueName: \"kubernetes.io/projected/ed296d68-b4cc-4931-8424-35586d5d0570-kube-api-access-brtl6\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.683199 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed296d68-b4cc-4931-8424-35586d5d0570-proxy-ca-bundles\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.683217 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/414aed21-07fc-4c2c-90fb-8fc90fd728e8-client-ca\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") 
" pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.683958 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/414aed21-07fc-4c2c-90fb-8fc90fd728e8-client-ca\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.684942 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414aed21-07fc-4c2c-90fb-8fc90fd728e8-config\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.685899 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed296d68-b4cc-4931-8424-35586d5d0570-proxy-ca-bundles\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.686639 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed296d68-b4cc-4931-8424-35586d5d0570-serving-cert\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.686676 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed296d68-b4cc-4931-8424-35586d5d0570-client-ca\") pod 
\"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.690231 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed296d68-b4cc-4931-8424-35586d5d0570-config\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.693698 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/414aed21-07fc-4c2c-90fb-8fc90fd728e8-serving-cert\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.709890 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71ef05be-3ff3-4a9f-b039-19c1840d1e2b" path="/var/lib/kubelet/pods/71ef05be-3ff3-4a9f-b039-19c1840d1e2b/volumes" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.711426 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065" path="/var/lib/kubelet/pods/aaecb55e-1c5b-4d0f-bd7d-4bc8ab571065/volumes" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.714392 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn8g6\" (UniqueName: \"kubernetes.io/projected/414aed21-07fc-4c2c-90fb-8fc90fd728e8-kube-api-access-zn8g6\") pod \"route-controller-manager-5b6fbcfdbf-gfpfs\" (UID: \"414aed21-07fc-4c2c-90fb-8fc90fd728e8\") " pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.721834 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brtl6\" (UniqueName: \"kubernetes.io/projected/ed296d68-b4cc-4931-8424-35586d5d0570-kube-api-access-brtl6\") pod \"controller-manager-58679ff78b-j5lgb\" (UID: \"ed296d68-b4cc-4931-8424-35586d5d0570\") " pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.810694 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:36 crc kubenswrapper[5012]: I0219 05:31:36.829327 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.319001 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58679ff78b-j5lgb"] Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.392977 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs"] Feb 19 05:31:37 crc kubenswrapper[5012]: W0219 05:31:37.410399 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod414aed21_07fc_4c2c_90fb_8fc90fd728e8.slice/crio-b59addad597d805858e7df6745d7f32a6eab781ca5a89a62b0f0b5cc03a4475a WatchSource:0}: Error finding container b59addad597d805858e7df6745d7f32a6eab781ca5a89a62b0f0b5cc03a4475a: Status 404 returned error can't find the container with id b59addad597d805858e7df6745d7f32a6eab781ca5a89a62b0f0b5cc03a4475a Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.848566 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" 
event={"ID":"414aed21-07fc-4c2c-90fb-8fc90fd728e8","Type":"ContainerStarted","Data":"0c1ab3d22bb836389dbf3d6be4f6993c174e6a8fc2395a45bd92a92446c80f3a"} Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.848622 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" event={"ID":"414aed21-07fc-4c2c-90fb-8fc90fd728e8","Type":"ContainerStarted","Data":"b59addad597d805858e7df6745d7f32a6eab781ca5a89a62b0f0b5cc03a4475a"} Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.849933 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.851154 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" event={"ID":"ed296d68-b4cc-4931-8424-35586d5d0570","Type":"ContainerStarted","Data":"93e92892510169b1f01d731736022db69c8066dcefaa042b93acfcae13a22a3a"} Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.851177 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" event={"ID":"ed296d68-b4cc-4931-8424-35586d5d0570","Type":"ContainerStarted","Data":"7650b4eb8c10c3c45bd474b512b23e3b92ec9094bb525c4df15df4e57940d45d"} Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.851753 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.869436 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" Feb 19 05:31:37 crc kubenswrapper[5012]: I0219 05:31:37.878178 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" podStartSLOduration=2.878163227 podStartE2EDuration="2.878163227s" podCreationTimestamp="2026-02-19 05:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:31:37.875211307 +0000 UTC m=+393.908533886" watchObservedRunningTime="2026-02-19 05:31:37.878163227 +0000 UTC m=+393.911485805" Feb 19 05:31:38 crc kubenswrapper[5012]: I0219 05:31:38.026850 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b6fbcfdbf-gfpfs" Feb 19 05:31:38 crc kubenswrapper[5012]: I0219 05:31:38.046249 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58679ff78b-j5lgb" podStartSLOduration=3.046222083 podStartE2EDuration="3.046222083s" podCreationTimestamp="2026-02-19 05:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:31:37.895869566 +0000 UTC m=+393.929192145" watchObservedRunningTime="2026-02-19 05:31:38.046222083 +0000 UTC m=+394.079544652" Feb 19 05:31:44 crc kubenswrapper[5012]: I0219 05:31:44.430489 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:31:44 crc kubenswrapper[5012]: I0219 05:31:44.431209 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 19 05:31:44 crc kubenswrapper[5012]: I0219 05:31:44.431261 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:31:44 crc kubenswrapper[5012]: I0219 05:31:44.431923 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f28c70f18d16a390f7b96cc5399b8c6c7031b7f62ee2bccc4e33b9c7c28fc6a0"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 05:31:44 crc kubenswrapper[5012]: I0219 05:31:44.431967 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://f28c70f18d16a390f7b96cc5399b8c6c7031b7f62ee2bccc4e33b9c7c28fc6a0" gracePeriod=600 Feb 19 05:31:44 crc kubenswrapper[5012]: I0219 05:31:44.922681 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="f28c70f18d16a390f7b96cc5399b8c6c7031b7f62ee2bccc4e33b9c7c28fc6a0" exitCode=0 Feb 19 05:31:44 crc kubenswrapper[5012]: I0219 05:31:44.922776 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"f28c70f18d16a390f7b96cc5399b8c6c7031b7f62ee2bccc4e33b9c7c28fc6a0"} Feb 19 05:31:44 crc kubenswrapper[5012]: I0219 05:31:44.923704 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"8431b8eb7363f7603ff116fd5d3f9ab3ed3f378fbd36db4efaaa1521cb246ddd"} Feb 19 05:31:44 crc kubenswrapper[5012]: I0219 05:31:44.923807 5012 scope.go:117] "RemoveContainer" containerID="5b6a14e6e549c883c85aaac605aa1b6ce419a791745fa265829184597b451049" Feb 19 05:31:52 crc kubenswrapper[5012]: I0219 05:31:52.496995 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" podUID="70e7a5c6-0abf-4c78-8087-958a19264b49" containerName="registry" containerID="cri-o://83d6198005201c652f989f86934dfd0087e9ca81b54e4a24ea15985ceb37c2cd" gracePeriod=30 Feb 19 05:31:52 crc kubenswrapper[5012]: I0219 05:31:52.994728 5012 generic.go:334] "Generic (PLEG): container finished" podID="70e7a5c6-0abf-4c78-8087-958a19264b49" containerID="83d6198005201c652f989f86934dfd0087e9ca81b54e4a24ea15985ceb37c2cd" exitCode=0 Feb 19 05:31:52 crc kubenswrapper[5012]: I0219 05:31:52.994861 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" event={"ID":"70e7a5c6-0abf-4c78-8087-958a19264b49","Type":"ContainerDied","Data":"83d6198005201c652f989f86934dfd0087e9ca81b54e4a24ea15985ceb37c2cd"} Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.071043 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.153243 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-certificates\") pod \"70e7a5c6-0abf-4c78-8087-958a19264b49\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.153424 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/70e7a5c6-0abf-4c78-8087-958a19264b49-installation-pull-secrets\") pod \"70e7a5c6-0abf-4c78-8087-958a19264b49\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.153505 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-tls\") pod \"70e7a5c6-0abf-4c78-8087-958a19264b49\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.153567 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/70e7a5c6-0abf-4c78-8087-958a19264b49-ca-trust-extracted\") pod \"70e7a5c6-0abf-4c78-8087-958a19264b49\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.153791 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"70e7a5c6-0abf-4c78-8087-958a19264b49\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.153851 5012 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-bound-sa-token\") pod \"70e7a5c6-0abf-4c78-8087-958a19264b49\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.153896 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmhxd\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-kube-api-access-pmhxd\") pod \"70e7a5c6-0abf-4c78-8087-958a19264b49\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.153933 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-trusted-ca\") pod \"70e7a5c6-0abf-4c78-8087-958a19264b49\" (UID: \"70e7a5c6-0abf-4c78-8087-958a19264b49\") " Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.156173 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "70e7a5c6-0abf-4c78-8087-958a19264b49" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.156249 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "70e7a5c6-0abf-4c78-8087-958a19264b49" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.169677 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "70e7a5c6-0abf-4c78-8087-958a19264b49" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.171460 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70e7a5c6-0abf-4c78-8087-958a19264b49-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "70e7a5c6-0abf-4c78-8087-958a19264b49" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.171477 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-kube-api-access-pmhxd" (OuterVolumeSpecName: "kube-api-access-pmhxd") pod "70e7a5c6-0abf-4c78-8087-958a19264b49" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49"). InnerVolumeSpecName "kube-api-access-pmhxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.175063 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70e7a5c6-0abf-4c78-8087-958a19264b49-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "70e7a5c6-0abf-4c78-8087-958a19264b49" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.177744 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "70e7a5c6-0abf-4c78-8087-958a19264b49" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.177859 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "70e7a5c6-0abf-4c78-8087-958a19264b49" (UID: "70e7a5c6-0abf-4c78-8087-958a19264b49"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.256167 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmhxd\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-kube-api-access-pmhxd\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.256864 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.256890 5012 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.256909 5012 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/70e7a5c6-0abf-4c78-8087-958a19264b49-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.256932 5012 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.256949 5012 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/70e7a5c6-0abf-4c78-8087-958a19264b49-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:53 crc kubenswrapper[5012]: I0219 05:31:53.256967 5012 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/70e7a5c6-0abf-4c78-8087-958a19264b49-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 05:31:54 crc kubenswrapper[5012]: I0219 05:31:54.006748 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" event={"ID":"70e7a5c6-0abf-4c78-8087-958a19264b49","Type":"ContainerDied","Data":"b4a4a4ebd6fc7c45c5fc88ca24394f42a5591b27d7679378f83e52a1da7bb083"} Feb 19 05:31:54 crc kubenswrapper[5012]: I0219 05:31:54.006840 5012 scope.go:117] "RemoveContainer" containerID="83d6198005201c652f989f86934dfd0087e9ca81b54e4a24ea15985ceb37c2cd" Feb 19 05:31:54 crc kubenswrapper[5012]: I0219 05:31:54.007046 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ljzsp" Feb 19 05:31:54 crc kubenswrapper[5012]: I0219 05:31:54.060585 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ljzsp"] Feb 19 05:31:54 crc kubenswrapper[5012]: I0219 05:31:54.068938 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ljzsp"] Feb 19 05:31:54 crc kubenswrapper[5012]: I0219 05:31:54.719127 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70e7a5c6-0abf-4c78-8087-958a19264b49" path="/var/lib/kubelet/pods/70e7a5c6-0abf-4c78-8087-958a19264b49/volumes" Feb 19 05:33:44 crc kubenswrapper[5012]: I0219 05:33:44.430872 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:33:44 crc kubenswrapper[5012]: I0219 05:33:44.431723 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:34:04 crc kubenswrapper[5012]: I0219 05:34:04.948798 5012 scope.go:117] "RemoveContainer" containerID="06936ac625543a23fe6a94c680d400a453b3652063590fadf1140acbd164e331" Feb 19 05:34:04 crc kubenswrapper[5012]: I0219 05:34:04.977808 5012 scope.go:117] "RemoveContainer" containerID="48aada40317b892d9a223a57a3ac3503ec0ff8bc3ff5df783ac9de195fd3495f" Feb 19 05:34:05 crc kubenswrapper[5012]: I0219 05:34:05.008715 5012 scope.go:117] "RemoveContainer" containerID="0fe51da344cbaacf6697c74dcff49e7182b9df6468c8ccbfb60f3cd9e38eda3d" Feb 19 
05:34:05 crc kubenswrapper[5012]: I0219 05:34:05.032859 5012 scope.go:117] "RemoveContainer" containerID="dc1a50c23707e41d34121953c7a07c7a6d9a618fec62090df956fa84f7fc89cb" Feb 19 05:34:14 crc kubenswrapper[5012]: I0219 05:34:14.431218 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:34:14 crc kubenswrapper[5012]: I0219 05:34:14.431641 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:34:32 crc kubenswrapper[5012]: I0219 05:34:32.983642 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-w66zf"] Feb 19 05:34:32 crc kubenswrapper[5012]: E0219 05:34:32.984802 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70e7a5c6-0abf-4c78-8087-958a19264b49" containerName="registry" Feb 19 05:34:32 crc kubenswrapper[5012]: I0219 05:34:32.984832 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e7a5c6-0abf-4c78-8087-958a19264b49" containerName="registry" Feb 19 05:34:32 crc kubenswrapper[5012]: I0219 05:34:32.985095 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="70e7a5c6-0abf-4c78-8087-958a19264b49" containerName="registry" Feb 19 05:34:32 crc kubenswrapper[5012]: I0219 05:34:32.985991 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-w66zf" Feb 19 05:34:32 crc kubenswrapper[5012]: I0219 05:34:32.989229 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 19 05:34:32 crc kubenswrapper[5012]: I0219 05:34:32.989608 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 19 05:34:32 crc kubenswrapper[5012]: I0219 05:34:32.989958 5012 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-djjz2" Feb 19 05:34:32 crc kubenswrapper[5012]: I0219 05:34:32.994612 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-sq68l"] Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.003194 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-w66zf"] Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.003299 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-sq68l" Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.005821 5012 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lgrng" Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.017549 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-sq68l"] Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.033224 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-drndq"] Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.034280 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-drndq" Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.038834 5012 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-2qrkh" Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.048956 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-drndq"] Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.057094 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp74w\" (UniqueName: \"kubernetes.io/projected/3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02-kube-api-access-sp74w\") pod \"cert-manager-858654f9db-sq68l\" (UID: \"3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02\") " pod="cert-manager/cert-manager-858654f9db-sq68l" Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.057160 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dhl6\" (UniqueName: \"kubernetes.io/projected/4b5870bd-8fb3-4eef-a893-f31ce8bb1506-kube-api-access-4dhl6\") pod \"cert-manager-cainjector-cf98fcc89-w66zf\" (UID: \"4b5870bd-8fb3-4eef-a893-f31ce8bb1506\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-w66zf" Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.057242 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvl4r\" (UniqueName: \"kubernetes.io/projected/53138562-0907-4b72-b228-21ef0c561f57-kube-api-access-mvl4r\") pod \"cert-manager-webhook-687f57d79b-drndq\" (UID: \"53138562-0907-4b72-b228-21ef0c561f57\") " pod="cert-manager/cert-manager-webhook-687f57d79b-drndq" Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.159288 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvl4r\" (UniqueName: 
\"kubernetes.io/projected/53138562-0907-4b72-b228-21ef0c561f57-kube-api-access-mvl4r\") pod \"cert-manager-webhook-687f57d79b-drndq\" (UID: \"53138562-0907-4b72-b228-21ef0c561f57\") " pod="cert-manager/cert-manager-webhook-687f57d79b-drndq" Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.159812 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp74w\" (UniqueName: \"kubernetes.io/projected/3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02-kube-api-access-sp74w\") pod \"cert-manager-858654f9db-sq68l\" (UID: \"3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02\") " pod="cert-manager/cert-manager-858654f9db-sq68l" Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.160129 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dhl6\" (UniqueName: \"kubernetes.io/projected/4b5870bd-8fb3-4eef-a893-f31ce8bb1506-kube-api-access-4dhl6\") pod \"cert-manager-cainjector-cf98fcc89-w66zf\" (UID: \"4b5870bd-8fb3-4eef-a893-f31ce8bb1506\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-w66zf" Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.185525 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvl4r\" (UniqueName: \"kubernetes.io/projected/53138562-0907-4b72-b228-21ef0c561f57-kube-api-access-mvl4r\") pod \"cert-manager-webhook-687f57d79b-drndq\" (UID: \"53138562-0907-4b72-b228-21ef0c561f57\") " pod="cert-manager/cert-manager-webhook-687f57d79b-drndq" Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.185631 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dhl6\" (UniqueName: \"kubernetes.io/projected/4b5870bd-8fb3-4eef-a893-f31ce8bb1506-kube-api-access-4dhl6\") pod \"cert-manager-cainjector-cf98fcc89-w66zf\" (UID: \"4b5870bd-8fb3-4eef-a893-f31ce8bb1506\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-w66zf" Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.192385 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp74w\" (UniqueName: \"kubernetes.io/projected/3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02-kube-api-access-sp74w\") pod \"cert-manager-858654f9db-sq68l\" (UID: \"3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02\") " pod="cert-manager/cert-manager-858654f9db-sq68l" Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.311431 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-w66zf" Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.322708 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-sq68l" Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.352246 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-drndq" Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.561884 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-sq68l"] Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.572190 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 05:34:33 crc kubenswrapper[5012]: I0219 05:34:33.603994 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-drndq"] Feb 19 05:34:33 crc kubenswrapper[5012]: W0219 05:34:33.606513 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53138562_0907_4b72_b228_21ef0c561f57.slice/crio-3e6a24b6c18a53eb562d357cb0bb03d93fae107003549eff7626cd6bd20d80f0 WatchSource:0}: Error finding container 3e6a24b6c18a53eb562d357cb0bb03d93fae107003549eff7626cd6bd20d80f0: Status 404 returned error can't find the container with id 3e6a24b6c18a53eb562d357cb0bb03d93fae107003549eff7626cd6bd20d80f0 Feb 19 05:34:33 crc 
kubenswrapper[5012]: I0219 05:34:33.726913 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-w66zf"] Feb 19 05:34:34 crc kubenswrapper[5012]: I0219 05:34:34.421900 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-w66zf" event={"ID":"4b5870bd-8fb3-4eef-a893-f31ce8bb1506","Type":"ContainerStarted","Data":"2443a21924ca9a9e9e636821e42f8ff74faeae4ba62ab4c8a14c54979eb024cc"} Feb 19 05:34:34 crc kubenswrapper[5012]: I0219 05:34:34.425262 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-sq68l" event={"ID":"3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02","Type":"ContainerStarted","Data":"c81be4549ba714461e1ad842137ec2e97cb94b59a4f7124a63440efa1a2d69ca"} Feb 19 05:34:34 crc kubenswrapper[5012]: I0219 05:34:34.431721 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-drndq" event={"ID":"53138562-0907-4b72-b228-21ef0c561f57","Type":"ContainerStarted","Data":"3e6a24b6c18a53eb562d357cb0bb03d93fae107003549eff7626cd6bd20d80f0"} Feb 19 05:34:38 crc kubenswrapper[5012]: I0219 05:34:38.463464 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-drndq" event={"ID":"53138562-0907-4b72-b228-21ef0c561f57","Type":"ContainerStarted","Data":"d6ba710338d54017600bc89324ba3f087df57b4be683ec57dc23d55033488818"} Feb 19 05:34:38 crc kubenswrapper[5012]: I0219 05:34:38.464298 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-drndq" Feb 19 05:34:38 crc kubenswrapper[5012]: I0219 05:34:38.466157 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-w66zf" event={"ID":"4b5870bd-8fb3-4eef-a893-f31ce8bb1506","Type":"ContainerStarted","Data":"85baf47ad4b311ef24f933fdadb4863eea872ba69aa43e2c9aa67387a980c566"} Feb 19 05:34:38 crc 
kubenswrapper[5012]: I0219 05:34:38.468995 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-sq68l" event={"ID":"3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02","Type":"ContainerStarted","Data":"8444265bcbc0e077e72a2cdb09628a6f8c212c892780b849c3cca39e475b1495"} Feb 19 05:34:38 crc kubenswrapper[5012]: I0219 05:34:38.518134 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-w66zf" podStartSLOduration=2.646956552 podStartE2EDuration="6.518096116s" podCreationTimestamp="2026-02-19 05:34:32 +0000 UTC" firstStartedPulling="2026-02-19 05:34:33.742087464 +0000 UTC m=+569.775410063" lastFinishedPulling="2026-02-19 05:34:37.613227018 +0000 UTC m=+573.646549627" observedRunningTime="2026-02-19 05:34:38.511432603 +0000 UTC m=+574.544755212" watchObservedRunningTime="2026-02-19 05:34:38.518096116 +0000 UTC m=+574.551418755" Feb 19 05:34:38 crc kubenswrapper[5012]: I0219 05:34:38.521006 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-drndq" podStartSLOduration=2.481222108 podStartE2EDuration="6.520985116s" podCreationTimestamp="2026-02-19 05:34:32 +0000 UTC" firstStartedPulling="2026-02-19 05:34:33.608759211 +0000 UTC m=+569.642081780" lastFinishedPulling="2026-02-19 05:34:37.648522179 +0000 UTC m=+573.681844788" observedRunningTime="2026-02-19 05:34:38.493701131 +0000 UTC m=+574.527023760" watchObservedRunningTime="2026-02-19 05:34:38.520985116 +0000 UTC m=+574.554307725" Feb 19 05:34:38 crc kubenswrapper[5012]: I0219 05:34:38.542617 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-sq68l" podStartSLOduration=2.484787806 podStartE2EDuration="6.542584613s" podCreationTimestamp="2026-02-19 05:34:32 +0000 UTC" firstStartedPulling="2026-02-19 05:34:33.57181601 +0000 UTC m=+569.605138589" lastFinishedPulling="2026-02-19 
05:34:37.629612777 +0000 UTC m=+573.662935396" observedRunningTime="2026-02-19 05:34:38.533390089 +0000 UTC m=+574.566712698" watchObservedRunningTime="2026-02-19 05:34:38.542584613 +0000 UTC m=+574.575907212" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.364158 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-drndq" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.387473 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8ff9w"] Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.390555 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="nbdb" containerID="cri-o://0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8" gracePeriod=30 Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.390462 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="sbdb" containerID="cri-o://99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d" gracePeriod=30 Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.390862 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4" gracePeriod=30 Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.390934 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kube-rbac-proxy-node" 
containerID="cri-o://c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0" gracePeriod=30 Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.390955 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovn-acl-logging" containerID="cri-o://ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771" gracePeriod=30 Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.390823 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="northd" containerID="cri-o://9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7" gracePeriod=30 Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.396582 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovn-controller" containerID="cri-o://b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6" gracePeriod=30 Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.456922 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" containerID="cri-o://92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb" gracePeriod=30 Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.509391 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/2.log" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.509715 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/1.log" Feb 19 
05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.509745 5012 generic.go:334] "Generic (PLEG): container finished" podID="e7a04e36-fbaa-4de1-871a-7225433eebb0" containerID="9dee99959c58361002b098beb811940fb74ac9f7c81b432ebe5142128b4aec05" exitCode=2 Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.509771 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lkrsg" event={"ID":"e7a04e36-fbaa-4de1-871a-7225433eebb0","Type":"ContainerDied","Data":"9dee99959c58361002b098beb811940fb74ac9f7c81b432ebe5142128b4aec05"} Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.509802 5012 scope.go:117] "RemoveContainer" containerID="fdb6ef53c73600e1d887d2dd404a2752f35a5c3db1e4298b7cecdb101087ddbd" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.510232 5012 scope.go:117] "RemoveContainer" containerID="9dee99959c58361002b098beb811940fb74ac9f7c81b432ebe5142128b4aec05" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.510399 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lkrsg_openshift-multus(e7a04e36-fbaa-4de1-871a-7225433eebb0)\"" pod="openshift-multus/multus-lkrsg" podUID="e7a04e36-fbaa-4de1-871a-7225433eebb0" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.785477 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/3.log" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.788294 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovn-acl-logging/0.log" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.789009 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovn-controller/0.log" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.789483 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.837730 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-node-log\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.837830 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-slash\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.837901 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-var-lib-cni-networks-ovn-kubernetes\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838012 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-bin\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838124 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-systemd-units\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838215 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-config\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838342 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-netns\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838383 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-openvswitch\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838457 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovn-node-metrics-cert\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838447 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: 
"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838498 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-node-log" (OuterVolumeSpecName: "node-log") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838539 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj2rz\" (UniqueName: \"kubernetes.io/projected/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-kube-api-access-sj2rz\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838553 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-slash" (OuterVolumeSpecName: "host-slash") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838569 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-systemd\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838609 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838650 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-kubelet\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838661 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838719 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-ovn-kubernetes\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838819 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-ovn\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838894 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-netd\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838943 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-var-lib-openvswitch\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.838997 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839028 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-script-lib\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839121 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-env-overrides\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839198 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-etc-openvswitch\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839281 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-log-socket\") pod \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\" (UID: \"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462\") " Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839840 5012 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-node-log\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839930 5012 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-slash\") on node \"crc\" DevicePath \"\"" Feb 19 
05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839960 5012 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839979 5012 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839997 5012 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.840057 5012 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839038 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839062 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.841060 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.839086 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.840119 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-log-socket" (OuterVolumeSpecName: "log-socket") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.840149 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.840246 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.840273 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.840560 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.840814 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.841435 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.846201 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-kube-api-access-sj2rz" (OuterVolumeSpecName: "kube-api-access-sj2rz") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "kube-api-access-sj2rz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.847618 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.862663 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" (UID: "0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865060 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6qzx6"] Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865276 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="sbdb" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865287 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="sbdb" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865295 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865317 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865325 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovn-acl-logging" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865331 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovn-acl-logging" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865341 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865346 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865353 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="nbdb" Feb 
19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865360 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="nbdb" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865368 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="northd" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865374 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="northd" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865383 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovn-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865388 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovn-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865395 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865401 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865408 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kube-rbac-proxy-node" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865414 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kube-rbac-proxy-node" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865423 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 05:34:43 crc 
kubenswrapper[5012]: I0219 05:34:43.865428 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865437 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865443 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865452 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kubecfg-setup" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865458 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kubecfg-setup" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865540 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865548 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kube-rbac-proxy-node" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865557 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovn-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865563 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865576 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="sbdb" Feb 
19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865595 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865604 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865611 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865618 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="northd" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865627 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovn-acl-logging" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865635 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="nbdb" Feb 19 05:34:43 crc kubenswrapper[5012]: E0219 05:34:43.865919 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.865925 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.866007 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerName="ovnkube-controller" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.868691 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.940756 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-run-ovn-kubernetes\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.940927 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-etc-openvswitch\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.940991 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941029 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-slash\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941102 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-var-lib-openvswitch\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941164 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ec62621-f2fc-41ba-b7d6-9a19035ca269-ovn-node-metrics-cert\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941263 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-run-ovn\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941339 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-run-netns\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941439 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-systemd-units\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941491 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-cni-netd\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941561 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-run-systemd\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941637 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb25b\" (UniqueName: \"kubernetes.io/projected/6ec62621-f2fc-41ba-b7d6-9a19035ca269-kube-api-access-lb25b\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941784 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-log-socket\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941888 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-kubelet\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.941971 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-node-log\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942036 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-cni-bin\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942121 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ec62621-f2fc-41ba-b7d6-9a19035ca269-ovnkube-config\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942400 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ec62621-f2fc-41ba-b7d6-9a19035ca269-ovnkube-script-lib\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942514 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-run-openvswitch\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942568 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ec62621-f2fc-41ba-b7d6-9a19035ca269-env-overrides\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942760 5012 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942803 5012 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942824 5012 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942848 5012 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942874 5012 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942897 5012 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942919 5012 
reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942942 5012 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.942963 5012 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-log-socket\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.943052 5012 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.943072 5012 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.943089 5012 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.943105 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj2rz\" (UniqueName: \"kubernetes.io/projected/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-kube-api-access-sj2rz\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:43 crc kubenswrapper[5012]: I0219 05:34:43.943122 5012 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.044838 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-kubelet\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.044913 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-node-log\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.044938 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-cni-bin\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.044972 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ec62621-f2fc-41ba-b7d6-9a19035ca269-ovnkube-config\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.044999 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ec62621-f2fc-41ba-b7d6-9a19035ca269-ovnkube-script-lib\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045022 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-run-openvswitch\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045030 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-kubelet\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045114 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-node-log\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045177 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-cni-bin\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045042 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ec62621-f2fc-41ba-b7d6-9a19035ca269-env-overrides\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045708 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-run-ovn-kubernetes\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045825 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-etc-openvswitch\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045746 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-run-ovn-kubernetes\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045958 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-etc-openvswitch\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045698 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-run-openvswitch\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.045925 5012 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046063 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-slash\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046117 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-var-lib-openvswitch\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046128 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ec62621-f2fc-41ba-b7d6-9a19035ca269-env-overrides\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046156 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ec62621-f2fc-41ba-b7d6-9a19035ca269-ovn-node-metrics-cert\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046137 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ec62621-f2fc-41ba-b7d6-9a19035ca269-ovnkube-config\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046175 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-slash\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046198 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-run-ovn\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046165 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-var-lib-openvswitch\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046290 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-run-netns\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046344 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-run-netns\") 
pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046265 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-run-ovn\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046378 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-systemd-units\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046433 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-cni-netd\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046446 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-systemd-units\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046498 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-run-systemd\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046499 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-cni-netd\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046534 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-run-systemd\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046541 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb25b\" (UniqueName: \"kubernetes.io/projected/6ec62621-f2fc-41ba-b7d6-9a19035ca269-kube-api-access-lb25b\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046602 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-log-socket\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046687 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-log-socket\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.046799 
5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ec62621-f2fc-41ba-b7d6-9a19035ca269-ovnkube-script-lib\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.047001 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ec62621-f2fc-41ba-b7d6-9a19035ca269-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.049423 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ec62621-f2fc-41ba-b7d6-9a19035ca269-ovn-node-metrics-cert\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.069726 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb25b\" (UniqueName: \"kubernetes.io/projected/6ec62621-f2fc-41ba-b7d6-9a19035ca269-kube-api-access-lb25b\") pod \"ovnkube-node-6qzx6\" (UID: \"6ec62621-f2fc-41ba-b7d6-9a19035ca269\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.228443 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.430876 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.430953 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.431012 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.431807 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8431b8eb7363f7603ff116fd5d3f9ab3ed3f378fbd36db4efaaa1521cb246ddd"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.431895 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://8431b8eb7363f7603ff116fd5d3f9ab3ed3f378fbd36db4efaaa1521cb246ddd" gracePeriod=600 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.519281 5012 generic.go:334] "Generic (PLEG): container finished" 
podID="6ec62621-f2fc-41ba-b7d6-9a19035ca269" containerID="a4f9d7bb55f0f3df054c5cd07c7f254b44e014e914d904ca33bd67f4a3dd4a9c" exitCode=0 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.519380 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerDied","Data":"a4f9d7bb55f0f3df054c5cd07c7f254b44e014e914d904ca33bd67f4a3dd4a9c"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.519411 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerStarted","Data":"4cb6291a83582275e968fc77dddad00d94f4693222836996f1df9d2be754f112"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.522189 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/2.log" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.528461 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovnkube-controller/3.log" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.530175 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovn-acl-logging/0.log" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.530609 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8ff9w_0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/ovn-controller/0.log" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.530941 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb" exitCode=0 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.530961 
5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d" exitCode=0 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.530969 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8" exitCode=0 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.530975 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7" exitCode=0 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.530983 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4" exitCode=0 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.530991 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0" exitCode=0 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.530999 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771" exitCode=143 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531005 5012 generic.go:334] "Generic (PLEG): container finished" podID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" containerID="b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6" exitCode=143 Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531021 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" 
event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531043 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531053 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531061 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531070 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531078 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531088 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} Feb 19 05:34:44 
crc kubenswrapper[5012]: I0219 05:34:44.531096 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531101 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531106 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531111 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531116 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531149 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531155 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531161 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c"} Feb 19 05:34:44 crc 
kubenswrapper[5012]: I0219 05:34:44.531169 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531179 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531187 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531193 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531200 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531207 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531214 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531220 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531226 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531231 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531239 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531248 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531257 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531263 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531269 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531275 5012 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531280 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531285 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531290 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531295 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531314 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531319 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531326 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" event={"ID":"0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462","Type":"ContainerDied","Data":"6412d35e0c37d9d105ee4ca82031f54078f7add4cd5d9abd98a4a8c14bd96adb"} Feb 19 
05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531333 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531339 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531344 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531350 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531354 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531360 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531365 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531370 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} Feb 19 
05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531375 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531380 5012 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c"} Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531393 5012 scope.go:117] "RemoveContainer" containerID="92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.531489 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8ff9w" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.620177 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.661778 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8ff9w"] Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.665312 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8ff9w"] Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.666857 5012 scope.go:117] "RemoveContainer" containerID="99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.686056 5012 scope.go:117] "RemoveContainer" containerID="0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.711192 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462" path="/var/lib/kubelet/pods/0c8e7ec6-86f9-4ffb-a32e-8f88e2eb8462/volumes" Feb 19 
05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.721666 5012 scope.go:117] "RemoveContainer" containerID="9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.754603 5012 scope.go:117] "RemoveContainer" containerID="988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.773339 5012 scope.go:117] "RemoveContainer" containerID="c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.802748 5012 scope.go:117] "RemoveContainer" containerID="ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.820473 5012 scope.go:117] "RemoveContainer" containerID="b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.843758 5012 scope.go:117] "RemoveContainer" containerID="e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.890316 5012 scope.go:117] "RemoveContainer" containerID="92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.890844 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": container with ID starting with 92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb not found: ID does not exist" containerID="92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.890874 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} err="failed to get container status 
\"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": rpc error: code = NotFound desc = could not find container \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": container with ID starting with 92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.890894 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.891437 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\": container with ID starting with b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f not found: ID does not exist" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.891480 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} err="failed to get container status \"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\": rpc error: code = NotFound desc = could not find container \"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\": container with ID starting with b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.891507 5012 scope.go:117] "RemoveContainer" containerID="99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.891901 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\": container with ID starting with 99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d not found: ID does not exist" containerID="99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.891927 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} err="failed to get container status \"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\": rpc error: code = NotFound desc = could not find container \"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\": container with ID starting with 99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.891942 5012 scope.go:117] "RemoveContainer" containerID="0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.892462 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\": container with ID starting with 0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8 not found: ID does not exist" containerID="0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.892494 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} err="failed to get container status \"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\": rpc error: code = NotFound desc = could not find container \"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\": container with ID 
starting with 0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.892507 5012 scope.go:117] "RemoveContainer" containerID="9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.892798 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\": container with ID starting with 9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7 not found: ID does not exist" containerID="9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.892817 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} err="failed to get container status \"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\": rpc error: code = NotFound desc = could not find container \"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\": container with ID starting with 9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.892828 5012 scope.go:117] "RemoveContainer" containerID="988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.893115 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\": container with ID starting with 988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4 not found: ID does not exist" containerID="988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4" Feb 19 
05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.893134 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} err="failed to get container status \"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\": rpc error: code = NotFound desc = could not find container \"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\": container with ID starting with 988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.893145 5012 scope.go:117] "RemoveContainer" containerID="c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.893469 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\": container with ID starting with c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0 not found: ID does not exist" containerID="c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.893489 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} err="failed to get container status \"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\": rpc error: code = NotFound desc = could not find container \"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\": container with ID starting with c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.893501 5012 scope.go:117] "RemoveContainer" 
containerID="ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.893829 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\": container with ID starting with ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771 not found: ID does not exist" containerID="ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.893854 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} err="failed to get container status \"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\": rpc error: code = NotFound desc = could not find container \"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\": container with ID starting with ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.893869 5012 scope.go:117] "RemoveContainer" containerID="b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.894222 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\": container with ID starting with b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6 not found: ID does not exist" containerID="b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.894269 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} err="failed to get container status \"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\": rpc error: code = NotFound desc = could not find container \"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\": container with ID starting with b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.894332 5012 scope.go:117] "RemoveContainer" containerID="e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c" Feb 19 05:34:44 crc kubenswrapper[5012]: E0219 05:34:44.894730 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\": container with ID starting with e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c not found: ID does not exist" containerID="e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.894832 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c"} err="failed to get container status \"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\": rpc error: code = NotFound desc = could not find container \"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\": container with ID starting with e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.894857 5012 scope.go:117] "RemoveContainer" containerID="92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.895269 5012 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} err="failed to get container status \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": rpc error: code = NotFound desc = could not find container \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": container with ID starting with 92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.895388 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.895733 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} err="failed to get container status \"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\": rpc error: code = NotFound desc = could not find container \"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\": container with ID starting with b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.895782 5012 scope.go:117] "RemoveContainer" containerID="99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.896380 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} err="failed to get container status \"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\": rpc error: code = NotFound desc = could not find container \"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\": container with ID starting with 99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d not 
found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.896416 5012 scope.go:117] "RemoveContainer" containerID="0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.897710 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} err="failed to get container status \"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\": rpc error: code = NotFound desc = could not find container \"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\": container with ID starting with 0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.897747 5012 scope.go:117] "RemoveContainer" containerID="9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.898139 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} err="failed to get container status \"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\": rpc error: code = NotFound desc = could not find container \"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\": container with ID starting with 9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.898171 5012 scope.go:117] "RemoveContainer" containerID="988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.899200 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} err="failed to get 
container status \"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\": rpc error: code = NotFound desc = could not find container \"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\": container with ID starting with 988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.899237 5012 scope.go:117] "RemoveContainer" containerID="c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.899637 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} err="failed to get container status \"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\": rpc error: code = NotFound desc = could not find container \"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\": container with ID starting with c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.899673 5012 scope.go:117] "RemoveContainer" containerID="ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.900187 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} err="failed to get container status \"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\": rpc error: code = NotFound desc = could not find container \"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\": container with ID starting with ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.900225 5012 scope.go:117] "RemoveContainer" 
containerID="b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.900824 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} err="failed to get container status \"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\": rpc error: code = NotFound desc = could not find container \"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\": container with ID starting with b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.900859 5012 scope.go:117] "RemoveContainer" containerID="e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.901597 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c"} err="failed to get container status \"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\": rpc error: code = NotFound desc = could not find container \"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\": container with ID starting with e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.901634 5012 scope.go:117] "RemoveContainer" containerID="92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.901963 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} err="failed to get container status \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": rpc error: code = NotFound desc = could 
not find container \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": container with ID starting with 92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.901998 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.902607 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} err="failed to get container status \"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\": rpc error: code = NotFound desc = could not find container \"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\": container with ID starting with b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.902642 5012 scope.go:117] "RemoveContainer" containerID="99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.903031 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} err="failed to get container status \"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\": rpc error: code = NotFound desc = could not find container \"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\": container with ID starting with 99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.903068 5012 scope.go:117] "RemoveContainer" containerID="0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 
05:34:44.903576 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} err="failed to get container status \"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\": rpc error: code = NotFound desc = could not find container \"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\": container with ID starting with 0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.903609 5012 scope.go:117] "RemoveContainer" containerID="9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.904163 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} err="failed to get container status \"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\": rpc error: code = NotFound desc = could not find container \"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\": container with ID starting with 9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.904196 5012 scope.go:117] "RemoveContainer" containerID="988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.904605 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} err="failed to get container status \"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\": rpc error: code = NotFound desc = could not find container \"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\": container with ID starting with 
988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.904638 5012 scope.go:117] "RemoveContainer" containerID="c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.905143 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} err="failed to get container status \"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\": rpc error: code = NotFound desc = could not find container \"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\": container with ID starting with c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.905176 5012 scope.go:117] "RemoveContainer" containerID="ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.905660 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} err="failed to get container status \"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\": rpc error: code = NotFound desc = could not find container \"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\": container with ID starting with ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.905683 5012 scope.go:117] "RemoveContainer" containerID="b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.905961 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} err="failed to get container status \"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\": rpc error: code = NotFound desc = could not find container \"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\": container with ID starting with b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.905985 5012 scope.go:117] "RemoveContainer" containerID="e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.906398 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c"} err="failed to get container status \"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\": rpc error: code = NotFound desc = could not find container \"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\": container with ID starting with e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.906421 5012 scope.go:117] "RemoveContainer" containerID="92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.906726 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} err="failed to get container status \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": rpc error: code = NotFound desc = could not find container \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": container with ID starting with 92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb not found: ID does not 
exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.906749 5012 scope.go:117] "RemoveContainer" containerID="b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.907154 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f"} err="failed to get container status \"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\": rpc error: code = NotFound desc = could not find container \"b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f\": container with ID starting with b866f96539fa44bbe2326f49e84b2b5cbf7a58f48072f04ec41559ffded8785f not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.907179 5012 scope.go:117] "RemoveContainer" containerID="99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.907681 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d"} err="failed to get container status \"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\": rpc error: code = NotFound desc = could not find container \"99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d\": container with ID starting with 99be1aa3c4be9dbc4c43a87848f44170b61315b80ac7a5b9bc26fc1a1a76bf7d not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.907705 5012 scope.go:117] "RemoveContainer" containerID="0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.908027 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8"} err="failed to get container status 
\"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\": rpc error: code = NotFound desc = could not find container \"0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8\": container with ID starting with 0dc0f92d9bb8b4b1955d74b3f935f14170d8cfe4e601dd37417172998d490dc8 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.908051 5012 scope.go:117] "RemoveContainer" containerID="9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.908530 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7"} err="failed to get container status \"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\": rpc error: code = NotFound desc = could not find container \"9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7\": container with ID starting with 9db4527a05bf267dbe92633fce8532a88d6ce9f2ecedd34f95129af81715bae7 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.908556 5012 scope.go:117] "RemoveContainer" containerID="988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.909720 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4"} err="failed to get container status \"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\": rpc error: code = NotFound desc = could not find container \"988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4\": container with ID starting with 988883212a2da51990dca1bcb57b3dcc426de2d0b170f4fea18c1b10ffed93c4 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.909747 5012 scope.go:117] "RemoveContainer" 
containerID="c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.911626 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0"} err="failed to get container status \"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\": rpc error: code = NotFound desc = could not find container \"c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0\": container with ID starting with c458bd3d1ff982a30c9ad87c638a6f69cf15401b60b1f6fcaca3b22e11eaf3a0 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.911691 5012 scope.go:117] "RemoveContainer" containerID="ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.912210 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771"} err="failed to get container status \"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\": rpc error: code = NotFound desc = could not find container \"ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771\": container with ID starting with ef78bf45915fb4d9c865148c9a3cd1e32c55d572d018357bb6fe2c5a1a820771 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.912232 5012 scope.go:117] "RemoveContainer" containerID="b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.912795 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6"} err="failed to get container status \"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\": rpc error: code = NotFound desc = could 
not find container \"b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6\": container with ID starting with b215bbba5b44150f162d59899023dfaa424560bd8cdfd01e8d64716830bae6d6 not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.912825 5012 scope.go:117] "RemoveContainer" containerID="e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.913210 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c"} err="failed to get container status \"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\": rpc error: code = NotFound desc = could not find container \"e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c\": container with ID starting with e52eb4bc38904f4766a308824c9181bcf326002d09cdb694cd084465cd98d63c not found: ID does not exist" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.913272 5012 scope.go:117] "RemoveContainer" containerID="92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb" Feb 19 05:34:44 crc kubenswrapper[5012]: I0219 05:34:44.913673 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb"} err="failed to get container status \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": rpc error: code = NotFound desc = could not find container \"92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb\": container with ID starting with 92666877fa9c1022e7359bf0939c0fb6510d5b669c0d3e0ecd04834c9269a3fb not found: ID does not exist" Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.555843 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" 
event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerStarted","Data":"83b227eb429e2994b96c5bdff9ce49cbc53ebb95afb0ef4ccd253f72f62a5d1d"} Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.556198 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerStarted","Data":"2f9be6835fcf01065e2130c3e3487efb2ef54eee456cd2bd27b15435706ccde1"} Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.556501 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerStarted","Data":"b93bc1b9b7f81328c8872f144656ad31f1788dc7f59cf686cac4e0ec3a4842f7"} Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.556521 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerStarted","Data":"e6bc3a7a41ac63fd15ffe32ba2ce551e4737100351913913b47699bdb695871d"} Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.556537 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerStarted","Data":"b37addd1b4f359e4bb39d9bdd286dc3bfa8faee43aa3998695001760be3c6db8"} Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.556553 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerStarted","Data":"596417a17e986c10a3d00c55f45c692c750dcabb022bc7d916d885bff4108ea8"} Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.560208 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="8431b8eb7363f7603ff116fd5d3f9ab3ed3f378fbd36db4efaaa1521cb246ddd" 
exitCode=0 Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.560349 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"8431b8eb7363f7603ff116fd5d3f9ab3ed3f378fbd36db4efaaa1521cb246ddd"} Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.560418 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"2fa30f17f6fec33303fdb3b3cb4c275384acd11d008a1c182ee7a051d5288089"} Feb 19 05:34:45 crc kubenswrapper[5012]: I0219 05:34:45.560459 5012 scope.go:117] "RemoveContainer" containerID="f28c70f18d16a390f7b96cc5399b8c6c7031b7f62ee2bccc4e33b9c7c28fc6a0" Feb 19 05:34:48 crc kubenswrapper[5012]: I0219 05:34:48.596011 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerStarted","Data":"fe8f8afb2d48483b5ab4b267574be9c0db01ed9361a3f5c4ff9ac20c578a82b2"} Feb 19 05:34:50 crc kubenswrapper[5012]: I0219 05:34:50.611839 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" event={"ID":"6ec62621-f2fc-41ba-b7d6-9a19035ca269","Type":"ContainerStarted","Data":"29d0aacb9914b3a0468b4ce7f4f0e8db9c2aeda872ba1bcb4d157e2c3d94a9a3"} Feb 19 05:34:50 crc kubenswrapper[5012]: I0219 05:34:50.617401 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:50 crc kubenswrapper[5012]: I0219 05:34:50.617460 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:50 crc kubenswrapper[5012]: I0219 05:34:50.617482 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:50 crc kubenswrapper[5012]: I0219 05:34:50.660713 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:50 crc kubenswrapper[5012]: I0219 05:34:50.670384 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" podStartSLOduration=7.670347278 podStartE2EDuration="7.670347278s" podCreationTimestamp="2026-02-19 05:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:34:50.661066282 +0000 UTC m=+586.694388911" watchObservedRunningTime="2026-02-19 05:34:50.670347278 +0000 UTC m=+586.703669887" Feb 19 05:34:50 crc kubenswrapper[5012]: I0219 05:34:50.673923 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:34:57 crc kubenswrapper[5012]: I0219 05:34:57.703069 5012 scope.go:117] "RemoveContainer" containerID="9dee99959c58361002b098beb811940fb74ac9f7c81b432ebe5142128b4aec05" Feb 19 05:34:57 crc kubenswrapper[5012]: E0219 05:34:57.704024 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lkrsg_openshift-multus(e7a04e36-fbaa-4de1-871a-7225433eebb0)\"" pod="openshift-multus/multus-lkrsg" podUID="e7a04e36-fbaa-4de1-871a-7225433eebb0" Feb 19 05:35:05 crc kubenswrapper[5012]: I0219 05:35:05.088960 5012 scope.go:117] "RemoveContainer" containerID="4d96789a875fc9919836ff36dc1d21b427a832c3292532d47b588b770f2a75ed" Feb 19 05:35:05 crc kubenswrapper[5012]: I0219 05:35:05.119082 5012 scope.go:117] "RemoveContainer" containerID="bcadb8bab70733341b7bb0cee1dc27ad28111033c1f70563d157cf39fc870bc1" Feb 19 05:35:05 crc kubenswrapper[5012]: I0219 
05:35:05.151716 5012 scope.go:117] "RemoveContainer" containerID="a95a4d514f0d6754b1714fed7c7959350d2abe5a30fa95a4004bef33fad2569c" Feb 19 05:35:05 crc kubenswrapper[5012]: I0219 05:35:05.179619 5012 scope.go:117] "RemoveContainer" containerID="ee07414de7a83d1212fd24fac006255c845d66e5f8765acbd5026e0f77d5182b" Feb 19 05:35:05 crc kubenswrapper[5012]: I0219 05:35:05.217172 5012 scope.go:117] "RemoveContainer" containerID="70dff26f289767b3751863d9c38507087e8b580a75adbd7af49ca49b727a95a9" Feb 19 05:35:05 crc kubenswrapper[5012]: I0219 05:35:05.238186 5012 scope.go:117] "RemoveContainer" containerID="b38c4d760b78c9580d7920d8d103f03ae36a4fb22594d35317c5a0fc8161982d" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.723674 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x"] Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.725701 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.728650 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.740737 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x"] Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.823076 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jfc4\" (UniqueName: \"kubernetes.io/projected/5efec1ed-3f58-4825-a63a-ceb26c38531e-kube-api-access-6jfc4\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.823159 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.823207 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.924759 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jfc4\" (UniqueName: \"kubernetes.io/projected/5efec1ed-3f58-4825-a63a-ceb26c38531e-kube-api-access-6jfc4\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.924854 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:09 crc 
kubenswrapper[5012]: I0219 05:35:09.924899 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.925674 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.926152 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:09 crc kubenswrapper[5012]: I0219 05:35:09.959879 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jfc4\" (UniqueName: \"kubernetes.io/projected/5efec1ed-3f58-4825-a63a-ceb26c38531e-kube-api-access-6jfc4\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:10 crc kubenswrapper[5012]: I0219 05:35:10.048602 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:10 crc kubenswrapper[5012]: E0219 05:35:10.079901 5012 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace_5efec1ed-3f58-4825-a63a-ceb26c38531e_0(d844d21b496239b13ccf49092f4ac1d3b9c1170b2358c57c90d4f29587085cef): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 05:35:10 crc kubenswrapper[5012]: E0219 05:35:10.079994 5012 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace_5efec1ed-3f58-4825-a63a-ceb26c38531e_0(d844d21b496239b13ccf49092f4ac1d3b9c1170b2358c57c90d4f29587085cef): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:10 crc kubenswrapper[5012]: E0219 05:35:10.080037 5012 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace_5efec1ed-3f58-4825-a63a-ceb26c38531e_0(d844d21b496239b13ccf49092f4ac1d3b9c1170b2358c57c90d4f29587085cef): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:10 crc kubenswrapper[5012]: E0219 05:35:10.080105 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace(5efec1ed-3f58-4825-a63a-ceb26c38531e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace(5efec1ed-3f58-4825-a63a-ceb26c38531e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace_5efec1ed-3f58-4825-a63a-ceb26c38531e_0(d844d21b496239b13ccf49092f4ac1d3b9c1170b2358c57c90d4f29587085cef): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" podUID="5efec1ed-3f58-4825-a63a-ceb26c38531e" Feb 19 05:35:10 crc kubenswrapper[5012]: I0219 05:35:10.703230 5012 scope.go:117] "RemoveContainer" containerID="9dee99959c58361002b098beb811940fb74ac9f7c81b432ebe5142128b4aec05" Feb 19 05:35:10 crc kubenswrapper[5012]: I0219 05:35:10.755992 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:10 crc kubenswrapper[5012]: I0219 05:35:10.756710 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:10 crc kubenswrapper[5012]: E0219 05:35:10.791207 5012 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace_5efec1ed-3f58-4825-a63a-ceb26c38531e_0(0563d3208f9a6bf6111665e5fefaab643ea03021bc30f4a24e9031267dfd98ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 05:35:10 crc kubenswrapper[5012]: E0219 05:35:10.791363 5012 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace_5efec1ed-3f58-4825-a63a-ceb26c38531e_0(0563d3208f9a6bf6111665e5fefaab643ea03021bc30f4a24e9031267dfd98ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:10 crc kubenswrapper[5012]: E0219 05:35:10.791405 5012 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace_5efec1ed-3f58-4825-a63a-ceb26c38531e_0(0563d3208f9a6bf6111665e5fefaab643ea03021bc30f4a24e9031267dfd98ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:10 crc kubenswrapper[5012]: E0219 05:35:10.791496 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace(5efec1ed-3f58-4825-a63a-ceb26c38531e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace(5efec1ed-3f58-4825-a63a-ceb26c38531e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_openshift-marketplace_5efec1ed-3f58-4825-a63a-ceb26c38531e_0(0563d3208f9a6bf6111665e5fefaab643ea03021bc30f4a24e9031267dfd98ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" podUID="5efec1ed-3f58-4825-a63a-ceb26c38531e" Feb 19 05:35:11 crc kubenswrapper[5012]: I0219 05:35:11.766371 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lkrsg_e7a04e36-fbaa-4de1-871a-7225433eebb0/kube-multus/2.log" Feb 19 05:35:11 crc kubenswrapper[5012]: I0219 05:35:11.767210 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lkrsg" event={"ID":"e7a04e36-fbaa-4de1-871a-7225433eebb0","Type":"ContainerStarted","Data":"28bdabb9481fea6fcdefabcabf2c194ea91f6504df441d4df357f2ecbc2368a6"} Feb 19 05:35:14 crc kubenswrapper[5012]: I0219 05:35:14.268325 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6qzx6" Feb 19 05:35:25 crc kubenswrapper[5012]: I0219 05:35:25.702634 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:25 crc kubenswrapper[5012]: I0219 05:35:25.704131 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:25 crc kubenswrapper[5012]: I0219 05:35:25.987149 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x"] Feb 19 05:35:26 crc kubenswrapper[5012]: I0219 05:35:26.879853 5012 generic.go:334] "Generic (PLEG): container finished" podID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerID="9fd43f54ae2cf94f3209642ea3f170f3fd0c1ae027d018b3ee9b11362794b164" exitCode=0 Feb 19 05:35:26 crc kubenswrapper[5012]: I0219 05:35:26.879908 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" event={"ID":"5efec1ed-3f58-4825-a63a-ceb26c38531e","Type":"ContainerDied","Data":"9fd43f54ae2cf94f3209642ea3f170f3fd0c1ae027d018b3ee9b11362794b164"} Feb 19 05:35:26 crc kubenswrapper[5012]: I0219 05:35:26.879940 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" event={"ID":"5efec1ed-3f58-4825-a63a-ceb26c38531e","Type":"ContainerStarted","Data":"2f48fe2eaaee966502a2840e3b0f88622eadd98f5013191e11a04a26705b4519"} Feb 19 05:35:28 crc kubenswrapper[5012]: I0219 05:35:28.896193 5012 generic.go:334] "Generic (PLEG): container finished" podID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerID="db55c4833a96e2ce6ad4aed68a18ae58441ae6680848e7568343d555f935b179" exitCode=0 Feb 19 05:35:28 crc kubenswrapper[5012]: I0219 05:35:28.896331 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" event={"ID":"5efec1ed-3f58-4825-a63a-ceb26c38531e","Type":"ContainerDied","Data":"db55c4833a96e2ce6ad4aed68a18ae58441ae6680848e7568343d555f935b179"} Feb 19 05:35:29 crc kubenswrapper[5012]: I0219 05:35:29.907916 5012 generic.go:334] "Generic (PLEG): container finished" podID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerID="29a03e0bb8c6b3d849b2e2d46284673c3830e2ee6a1ed817d423064737124419" exitCode=0 Feb 19 05:35:29 crc kubenswrapper[5012]: I0219 05:35:29.908151 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" event={"ID":"5efec1ed-3f58-4825-a63a-ceb26c38531e","Type":"ContainerDied","Data":"29a03e0bb8c6b3d849b2e2d46284673c3830e2ee6a1ed817d423064737124419"} Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.220391 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.352218 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-bundle\") pod \"5efec1ed-3f58-4825-a63a-ceb26c38531e\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.352289 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jfc4\" (UniqueName: \"kubernetes.io/projected/5efec1ed-3f58-4825-a63a-ceb26c38531e-kube-api-access-6jfc4\") pod \"5efec1ed-3f58-4825-a63a-ceb26c38531e\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.352375 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-util\") pod \"5efec1ed-3f58-4825-a63a-ceb26c38531e\" (UID: \"5efec1ed-3f58-4825-a63a-ceb26c38531e\") " Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.356454 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-bundle" (OuterVolumeSpecName: "bundle") pod "5efec1ed-3f58-4825-a63a-ceb26c38531e" (UID: "5efec1ed-3f58-4825-a63a-ceb26c38531e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.361007 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5efec1ed-3f58-4825-a63a-ceb26c38531e-kube-api-access-6jfc4" (OuterVolumeSpecName: "kube-api-access-6jfc4") pod "5efec1ed-3f58-4825-a63a-ceb26c38531e" (UID: "5efec1ed-3f58-4825-a63a-ceb26c38531e"). InnerVolumeSpecName "kube-api-access-6jfc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.384144 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-util" (OuterVolumeSpecName: "util") pod "5efec1ed-3f58-4825-a63a-ceb26c38531e" (UID: "5efec1ed-3f58-4825-a63a-ceb26c38531e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.455775 5012 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.455822 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jfc4\" (UniqueName: \"kubernetes.io/projected/5efec1ed-3f58-4825-a63a-ceb26c38531e-kube-api-access-6jfc4\") on node \"crc\" DevicePath \"\"" Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.455843 5012 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5efec1ed-3f58-4825-a63a-ceb26c38531e-util\") on node \"crc\" DevicePath \"\"" Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.926887 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" event={"ID":"5efec1ed-3f58-4825-a63a-ceb26c38531e","Type":"ContainerDied","Data":"2f48fe2eaaee966502a2840e3b0f88622eadd98f5013191e11a04a26705b4519"} Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.926944 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f48fe2eaaee966502a2840e3b0f88622eadd98f5013191e11a04a26705b4519" Feb 19 05:35:31 crc kubenswrapper[5012]: I0219 05:35:31.927052 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.322048 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t"] Feb 19 05:35:42 crc kubenswrapper[5012]: E0219 05:35:42.322927 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerName="util" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.322942 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerName="util" Feb 19 05:35:42 crc kubenswrapper[5012]: E0219 05:35:42.322965 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerName="extract" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.322973 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerName="extract" Feb 19 05:35:42 crc kubenswrapper[5012]: E0219 05:35:42.322990 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerName="pull" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.322998 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerName="pull" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.323112 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="5efec1ed-3f58-4825-a63a-ceb26c38531e" containerName="extract" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.323584 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.325785 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.326350 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-28dbr" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.327555 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.339375 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t"] Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.426032 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfmtm\" (UniqueName: \"kubernetes.io/projected/9f3d925a-f08d-4e92-baf3-805f27c9ae35-kube-api-access-zfmtm\") pod \"obo-prometheus-operator-68bc856cb9-9t66t\" (UID: \"9f3d925a-f08d-4e92-baf3-805f27c9ae35\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.476727 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp"] Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.477801 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.479969 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.481023 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-v9p5h" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.491457 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp"] Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.511698 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg"] Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.512406 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.522015 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg"] Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.527788 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9364b7f3-e3e3-4432-a4e7-4b80c9a50225-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-cddcp\" (UID: \"9364b7f3-e3e3-4432-a4e7-4b80c9a50225\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.527916 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c60bb85-2242-4d9f-95f9-27b2e747727d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-rlcjg\" (UID: \"3c60bb85-2242-4d9f-95f9-27b2e747727d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.527949 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9364b7f3-e3e3-4432-a4e7-4b80c9a50225-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-cddcp\" (UID: \"9364b7f3-e3e3-4432-a4e7-4b80c9a50225\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.527974 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/3c60bb85-2242-4d9f-95f9-27b2e747727d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-rlcjg\" (UID: \"3c60bb85-2242-4d9f-95f9-27b2e747727d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.528015 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfmtm\" (UniqueName: \"kubernetes.io/projected/9f3d925a-f08d-4e92-baf3-805f27c9ae35-kube-api-access-zfmtm\") pod \"obo-prometheus-operator-68bc856cb9-9t66t\" (UID: \"9f3d925a-f08d-4e92-baf3-805f27c9ae35\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.574680 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfmtm\" (UniqueName: \"kubernetes.io/projected/9f3d925a-f08d-4e92-baf3-805f27c9ae35-kube-api-access-zfmtm\") pod \"obo-prometheus-operator-68bc856cb9-9t66t\" (UID: \"9f3d925a-f08d-4e92-baf3-805f27c9ae35\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.628450 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9364b7f3-e3e3-4432-a4e7-4b80c9a50225-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-cddcp\" (UID: \"9364b7f3-e3e3-4432-a4e7-4b80c9a50225\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.628519 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c60bb85-2242-4d9f-95f9-27b2e747727d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-rlcjg\" (UID: \"3c60bb85-2242-4d9f-95f9-27b2e747727d\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.628543 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9364b7f3-e3e3-4432-a4e7-4b80c9a50225-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-cddcp\" (UID: \"9364b7f3-e3e3-4432-a4e7-4b80c9a50225\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.628575 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c60bb85-2242-4d9f-95f9-27b2e747727d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-rlcjg\" (UID: \"3c60bb85-2242-4d9f-95f9-27b2e747727d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.631852 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9364b7f3-e3e3-4432-a4e7-4b80c9a50225-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-cddcp\" (UID: \"9364b7f3-e3e3-4432-a4e7-4b80c9a50225\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.632728 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9364b7f3-e3e3-4432-a4e7-4b80c9a50225-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-cddcp\" (UID: \"9364b7f3-e3e3-4432-a4e7-4b80c9a50225\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.645557 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.645594 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c60bb85-2242-4d9f-95f9-27b2e747727d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-rlcjg\" (UID: \"3c60bb85-2242-4d9f-95f9-27b2e747727d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.647999 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-vw7xl"] Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.648659 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.655189 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-r5qcs" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.655393 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c60bb85-2242-4d9f-95f9-27b2e747727d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-685558f558-rlcjg\" (UID: \"3c60bb85-2242-4d9f-95f9-27b2e747727d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.655822 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.661000 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-vw7xl"] Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.729997 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dkxz\" (UniqueName: \"kubernetes.io/projected/63ee166b-5027-4928-9196-9488685f87d5-kube-api-access-6dkxz\") pod \"observability-operator-59bdc8b94-vw7xl\" (UID: \"63ee166b-5027-4928-9196-9488685f87d5\") " pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.730055 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/63ee166b-5027-4928-9196-9488685f87d5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-vw7xl\" (UID: \"63ee166b-5027-4928-9196-9488685f87d5\") " pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.772060 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5grbr"] Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.772748 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.775102 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-czkvc" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.785366 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5grbr"] Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.793647 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.831377 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dkxz\" (UniqueName: \"kubernetes.io/projected/63ee166b-5027-4928-9196-9488685f87d5-kube-api-access-6dkxz\") pod \"observability-operator-59bdc8b94-vw7xl\" (UID: \"63ee166b-5027-4928-9196-9488685f87d5\") " pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.831433 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/63ee166b-5027-4928-9196-9488685f87d5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-vw7xl\" (UID: \"63ee166b-5027-4928-9196-9488685f87d5\") " pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.831474 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/86bcbf15-9553-41af-974c-3418e588e575-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5grbr\" (UID: \"86bcbf15-9553-41af-974c-3418e588e575\") " pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.831496 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd5lg\" (UniqueName: \"kubernetes.io/projected/86bcbf15-9553-41af-974c-3418e588e575-kube-api-access-hd5lg\") pod \"perses-operator-5bf474d74f-5grbr\" (UID: \"86bcbf15-9553-41af-974c-3418e588e575\") " pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.832220 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.848996 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/63ee166b-5027-4928-9196-9488685f87d5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-vw7xl\" (UID: \"63ee166b-5027-4928-9196-9488685f87d5\") " pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.850346 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dkxz\" (UniqueName: \"kubernetes.io/projected/63ee166b-5027-4928-9196-9488685f87d5-kube-api-access-6dkxz\") pod \"observability-operator-59bdc8b94-vw7xl\" (UID: \"63ee166b-5027-4928-9196-9488685f87d5\") " pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.938681 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/86bcbf15-9553-41af-974c-3418e588e575-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5grbr\" (UID: \"86bcbf15-9553-41af-974c-3418e588e575\") " pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.938756 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd5lg\" (UniqueName: \"kubernetes.io/projected/86bcbf15-9553-41af-974c-3418e588e575-kube-api-access-hd5lg\") pod \"perses-operator-5bf474d74f-5grbr\" (UID: \"86bcbf15-9553-41af-974c-3418e588e575\") " pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.939539 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/86bcbf15-9553-41af-974c-3418e588e575-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5grbr\" (UID: \"86bcbf15-9553-41af-974c-3418e588e575\") " pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:35:42 crc kubenswrapper[5012]: I0219 05:35:42.958094 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd5lg\" (UniqueName: \"kubernetes.io/projected/86bcbf15-9553-41af-974c-3418e588e575-kube-api-access-hd5lg\") pod \"perses-operator-5bf474d74f-5grbr\" (UID: \"86bcbf15-9553-41af-974c-3418e588e575\") " pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:35:43 crc kubenswrapper[5012]: I0219 05:35:43.014236 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:43 crc kubenswrapper[5012]: I0219 05:35:43.099159 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:35:43 crc kubenswrapper[5012]: I0219 05:35:43.176682 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t"] Feb 19 05:35:43 crc kubenswrapper[5012]: I0219 05:35:43.181202 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg"] Feb 19 05:35:43 crc kubenswrapper[5012]: I0219 05:35:43.260342 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-vw7xl"] Feb 19 05:35:43 crc kubenswrapper[5012]: W0219 05:35:43.266837 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63ee166b_5027_4928_9196_9488685f87d5.slice/crio-f0cf2d5dba7517c5301cb01f9524808995d09d3ee5196cb589bec114b2db0f7b WatchSource:0}: Error finding container 
f0cf2d5dba7517c5301cb01f9524808995d09d3ee5196cb589bec114b2db0f7b: Status 404 returned error can't find the container with id f0cf2d5dba7517c5301cb01f9524808995d09d3ee5196cb589bec114b2db0f7b Feb 19 05:35:43 crc kubenswrapper[5012]: I0219 05:35:43.319111 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp"] Feb 19 05:35:43 crc kubenswrapper[5012]: I0219 05:35:43.330619 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5grbr"] Feb 19 05:35:43 crc kubenswrapper[5012]: I0219 05:35:43.998506 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t" event={"ID":"9f3d925a-f08d-4e92-baf3-805f27c9ae35","Type":"ContainerStarted","Data":"8076f3a76096d678df7cb75d647a683c504432ac387ddb0b69792742e06b83cb"} Feb 19 05:35:44 crc kubenswrapper[5012]: I0219 05:35:44.000003 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" event={"ID":"3c60bb85-2242-4d9f-95f9-27b2e747727d","Type":"ContainerStarted","Data":"decbfafcdffb05c9646234dc88d5c9108df84f29d3a2946013f63bb0104908ff"} Feb 19 05:35:44 crc kubenswrapper[5012]: I0219 05:35:44.001280 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" event={"ID":"63ee166b-5027-4928-9196-9488685f87d5","Type":"ContainerStarted","Data":"f0cf2d5dba7517c5301cb01f9524808995d09d3ee5196cb589bec114b2db0f7b"} Feb 19 05:35:44 crc kubenswrapper[5012]: I0219 05:35:44.002576 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" event={"ID":"9364b7f3-e3e3-4432-a4e7-4b80c9a50225","Type":"ContainerStarted","Data":"e169d28e6f802b174e3c8499da1ef41a5d88851bf8b88dd01bc25b22dcb10dd8"} Feb 19 05:35:44 crc kubenswrapper[5012]: 
I0219 05:35:44.003650 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-5grbr" event={"ID":"86bcbf15-9553-41af-974c-3418e588e575","Type":"ContainerStarted","Data":"9716acd1be5c14c16b4e3470d339d6f4750bdb4784f8703513acb9355a92e079"} Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.086112 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t" event={"ID":"9f3d925a-f08d-4e92-baf3-805f27c9ae35","Type":"ContainerStarted","Data":"3c7b4b34ad99c637f07f3ff0340c42839989e82c30118f8b2a7ee3c62fe12e84"} Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.088780 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" event={"ID":"3c60bb85-2242-4d9f-95f9-27b2e747727d","Type":"ContainerStarted","Data":"899a5845869dfcfeb04ba83980f09110794eca3bb997776543ade74ecee7195e"} Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.090960 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" event={"ID":"63ee166b-5027-4928-9196-9488685f87d5","Type":"ContainerStarted","Data":"55dba0532de292e84f9dc303c6cb95587b6ad9eb2f0ca1604e680efedae4a4b0"} Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.091593 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.095210 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" event={"ID":"9364b7f3-e3e3-4432-a4e7-4b80c9a50225","Type":"ContainerStarted","Data":"69d4dd462bcfd2e048a40def5db79d35be38edd4f7d97212498635ad6b153f73"} Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.098269 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/perses-operator-5bf474d74f-5grbr" event={"ID":"86bcbf15-9553-41af-974c-3418e588e575","Type":"ContainerStarted","Data":"c47943eb452a908c828c9e6d64b9e585df12cc83ddf496e5fb1fce96614030ac"} Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.098959 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.125492 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.131417 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9t66t" podStartSLOduration=2.362615518 podStartE2EDuration="12.131389898s" podCreationTimestamp="2026-02-19 05:35:42 +0000 UTC" firstStartedPulling="2026-02-19 05:35:43.216796964 +0000 UTC m=+639.250119533" lastFinishedPulling="2026-02-19 05:35:52.985571304 +0000 UTC m=+649.018893913" observedRunningTime="2026-02-19 05:35:54.115832119 +0000 UTC m=+650.149154718" watchObservedRunningTime="2026-02-19 05:35:54.131389898 +0000 UTC m=+650.164712507" Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.143901 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-rlcjg" podStartSLOduration=2.322943669 podStartE2EDuration="12.143872972s" podCreationTimestamp="2026-02-19 05:35:42 +0000 UTC" firstStartedPulling="2026-02-19 05:35:43.200480246 +0000 UTC m=+639.233802815" lastFinishedPulling="2026-02-19 05:35:53.021409539 +0000 UTC m=+649.054732118" observedRunningTime="2026-02-19 05:35:54.143015061 +0000 UTC m=+650.176337710" watchObservedRunningTime="2026-02-19 05:35:54.143872972 +0000 UTC m=+650.177195581" Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.186778 5012 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-vw7xl" podStartSLOduration=2.4357313879999998 podStartE2EDuration="12.186763526s" podCreationTimestamp="2026-02-19 05:35:42 +0000 UTC" firstStartedPulling="2026-02-19 05:35:43.268937136 +0000 UTC m=+639.302259705" lastFinishedPulling="2026-02-19 05:35:53.019969264 +0000 UTC m=+649.053291843" observedRunningTime="2026-02-19 05:35:54.184781297 +0000 UTC m=+650.218103866" watchObservedRunningTime="2026-02-19 05:35:54.186763526 +0000 UTC m=+650.220086095" Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.204040 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-685558f558-cddcp" podStartSLOduration=2.524905173 podStartE2EDuration="12.204016956s" podCreationTimestamp="2026-02-19 05:35:42 +0000 UTC" firstStartedPulling="2026-02-19 05:35:43.310945701 +0000 UTC m=+639.344268270" lastFinishedPulling="2026-02-19 05:35:52.990057474 +0000 UTC m=+649.023380053" observedRunningTime="2026-02-19 05:35:54.202182151 +0000 UTC m=+650.235504730" watchObservedRunningTime="2026-02-19 05:35:54.204016956 +0000 UTC m=+650.237339535" Feb 19 05:35:54 crc kubenswrapper[5012]: I0219 05:35:54.222906 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-5grbr" podStartSLOduration=2.578582561 podStartE2EDuration="12.222884615s" podCreationTimestamp="2026-02-19 05:35:42 +0000 UTC" firstStartedPulling="2026-02-19 05:35:43.342604103 +0000 UTC m=+639.375926662" lastFinishedPulling="2026-02-19 05:35:52.986906127 +0000 UTC m=+649.020228716" observedRunningTime="2026-02-19 05:35:54.222450464 +0000 UTC m=+650.255773273" watchObservedRunningTime="2026-02-19 05:35:54.222884615 +0000 UTC m=+650.256207194" Feb 19 05:36:03 crc kubenswrapper[5012]: I0219 05:36:03.103841 5012 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-5grbr" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.454768 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj"] Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.456237 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.460142 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.472947 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj"] Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.588468 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tkl5\" (UniqueName: \"kubernetes.io/projected/6865121b-f9c2-439e-a64a-bf7d94f35797-kube-api-access-7tkl5\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.588922 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.589047 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.690452 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tkl5\" (UniqueName: \"kubernetes.io/projected/6865121b-f9c2-439e-a64a-bf7d94f35797-kube-api-access-7tkl5\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.690541 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.690621 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.691393 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-bundle\") 
pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.691496 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.719966 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tkl5\" (UniqueName: \"kubernetes.io/projected/6865121b-f9c2-439e-a64a-bf7d94f35797-kube-api-access-7tkl5\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:20 crc kubenswrapper[5012]: I0219 05:36:20.782670 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:21 crc kubenswrapper[5012]: I0219 05:36:21.053181 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj"] Feb 19 05:36:21 crc kubenswrapper[5012]: I0219 05:36:21.289332 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" event={"ID":"6865121b-f9c2-439e-a64a-bf7d94f35797","Type":"ContainerStarted","Data":"eee2384060ea0dda1f6a8bbcbf1f6151ab035c791f851aa6cea893c67654a899"} Feb 19 05:36:21 crc kubenswrapper[5012]: I0219 05:36:21.289396 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" event={"ID":"6865121b-f9c2-439e-a64a-bf7d94f35797","Type":"ContainerStarted","Data":"41b8a51b0e4093530db1c36c82a26b069c40f43543a37b05d0d8145db64abbec"} Feb 19 05:36:22 crc kubenswrapper[5012]: I0219 05:36:22.298765 5012 generic.go:334] "Generic (PLEG): container finished" podID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerID="eee2384060ea0dda1f6a8bbcbf1f6151ab035c791f851aa6cea893c67654a899" exitCode=0 Feb 19 05:36:22 crc kubenswrapper[5012]: I0219 05:36:22.298858 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" event={"ID":"6865121b-f9c2-439e-a64a-bf7d94f35797","Type":"ContainerDied","Data":"eee2384060ea0dda1f6a8bbcbf1f6151ab035c791f851aa6cea893c67654a899"} Feb 19 05:36:24 crc kubenswrapper[5012]: I0219 05:36:24.316269 5012 generic.go:334] "Generic (PLEG): container finished" podID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerID="bf2f5f9a6d89a4a351477dea01062227c1f4f678c68a29d37d63976636f6c613" exitCode=0 Feb 19 05:36:24 crc kubenswrapper[5012]: I0219 05:36:24.316374 5012 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" event={"ID":"6865121b-f9c2-439e-a64a-bf7d94f35797","Type":"ContainerDied","Data":"bf2f5f9a6d89a4a351477dea01062227c1f4f678c68a29d37d63976636f6c613"} Feb 19 05:36:25 crc kubenswrapper[5012]: I0219 05:36:25.329498 5012 generic.go:334] "Generic (PLEG): container finished" podID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerID="4f467b3a4163df5642ab20e77f91848f63dec61e4b4430bcf21fe04d298fd6a7" exitCode=0 Feb 19 05:36:25 crc kubenswrapper[5012]: I0219 05:36:25.329574 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" event={"ID":"6865121b-f9c2-439e-a64a-bf7d94f35797","Type":"ContainerDied","Data":"4f467b3a4163df5642ab20e77f91848f63dec61e4b4430bcf21fe04d298fd6a7"} Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.691244 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.773787 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tkl5\" (UniqueName: \"kubernetes.io/projected/6865121b-f9c2-439e-a64a-bf7d94f35797-kube-api-access-7tkl5\") pod \"6865121b-f9c2-439e-a64a-bf7d94f35797\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.773835 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-util\") pod \"6865121b-f9c2-439e-a64a-bf7d94f35797\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.773954 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-bundle\") pod \"6865121b-f9c2-439e-a64a-bf7d94f35797\" (UID: \"6865121b-f9c2-439e-a64a-bf7d94f35797\") " Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.775022 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-bundle" (OuterVolumeSpecName: "bundle") pod "6865121b-f9c2-439e-a64a-bf7d94f35797" (UID: "6865121b-f9c2-439e-a64a-bf7d94f35797"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.784554 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6865121b-f9c2-439e-a64a-bf7d94f35797-kube-api-access-7tkl5" (OuterVolumeSpecName: "kube-api-access-7tkl5") pod "6865121b-f9c2-439e-a64a-bf7d94f35797" (UID: "6865121b-f9c2-439e-a64a-bf7d94f35797"). InnerVolumeSpecName "kube-api-access-7tkl5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.797954 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-util" (OuterVolumeSpecName: "util") pod "6865121b-f9c2-439e-a64a-bf7d94f35797" (UID: "6865121b-f9c2-439e-a64a-bf7d94f35797"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.875693 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tkl5\" (UniqueName: \"kubernetes.io/projected/6865121b-f9c2-439e-a64a-bf7d94f35797-kube-api-access-7tkl5\") on node \"crc\" DevicePath \"\"" Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.875743 5012 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-util\") on node \"crc\" DevicePath \"\"" Feb 19 05:36:26 crc kubenswrapper[5012]: I0219 05:36:26.875765 5012 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6865121b-f9c2-439e-a64a-bf7d94f35797-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:36:27 crc kubenswrapper[5012]: I0219 05:36:27.348826 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" event={"ID":"6865121b-f9c2-439e-a64a-bf7d94f35797","Type":"ContainerDied","Data":"41b8a51b0e4093530db1c36c82a26b069c40f43543a37b05d0d8145db64abbec"} Feb 19 05:36:27 crc kubenswrapper[5012]: I0219 05:36:27.348901 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41b8a51b0e4093530db1c36c82a26b069c40f43543a37b05d0d8145db64abbec" Feb 19 05:36:27 crc kubenswrapper[5012]: I0219 05:36:27.348932 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.181567 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-2smgj"] Feb 19 05:36:32 crc kubenswrapper[5012]: E0219 05:36:32.181923 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerName="util" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.181945 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerName="util" Feb 19 05:36:32 crc kubenswrapper[5012]: E0219 05:36:32.181980 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerName="extract" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.181993 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerName="extract" Feb 19 05:36:32 crc kubenswrapper[5012]: E0219 05:36:32.182018 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerName="pull" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.182032 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerName="pull" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.182231 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6865121b-f9c2-439e-a64a-bf7d94f35797" containerName="extract" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.182883 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-2smgj" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.185014 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-g5dkw" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.187398 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.187562 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.197626 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-2smgj"] Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.277725 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c28mk\" (UniqueName: \"kubernetes.io/projected/d6ac1260-4ff8-4025-af6e-35711452ef6f-kube-api-access-c28mk\") pod \"nmstate-operator-694c9596b7-2smgj\" (UID: \"d6ac1260-4ff8-4025-af6e-35711452ef6f\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-2smgj" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.378778 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c28mk\" (UniqueName: \"kubernetes.io/projected/d6ac1260-4ff8-4025-af6e-35711452ef6f-kube-api-access-c28mk\") pod \"nmstate-operator-694c9596b7-2smgj\" (UID: \"d6ac1260-4ff8-4025-af6e-35711452ef6f\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-2smgj" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.404427 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c28mk\" (UniqueName: \"kubernetes.io/projected/d6ac1260-4ff8-4025-af6e-35711452ef6f-kube-api-access-c28mk\") pod \"nmstate-operator-694c9596b7-2smgj\" (UID: 
\"d6ac1260-4ff8-4025-af6e-35711452ef6f\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-2smgj" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.500959 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-2smgj" Feb 19 05:36:32 crc kubenswrapper[5012]: I0219 05:36:32.782871 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-2smgj"] Feb 19 05:36:33 crc kubenswrapper[5012]: I0219 05:36:33.393604 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-2smgj" event={"ID":"d6ac1260-4ff8-4025-af6e-35711452ef6f","Type":"ContainerStarted","Data":"b566bb203443a360511b0257d1a4b867989faad6c3289560d1391e06254cf1be"} Feb 19 05:36:35 crc kubenswrapper[5012]: I0219 05:36:35.417383 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-2smgj" event={"ID":"d6ac1260-4ff8-4025-af6e-35711452ef6f","Type":"ContainerStarted","Data":"743a418753ca9e4577b3915d68bcb88a6f09b8f67d23526601079c9e85323f7c"} Feb 19 05:36:35 crc kubenswrapper[5012]: I0219 05:36:35.444092 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-2smgj" podStartSLOduration=0.979862993 podStartE2EDuration="3.444059026s" podCreationTimestamp="2026-02-19 05:36:32 +0000 UTC" firstStartedPulling="2026-02-19 05:36:32.795274569 +0000 UTC m=+688.828597138" lastFinishedPulling="2026-02-19 05:36:35.259470602 +0000 UTC m=+691.292793171" observedRunningTime="2026-02-19 05:36:35.43394123 +0000 UTC m=+691.467263809" watchObservedRunningTime="2026-02-19 05:36:35.444059026 +0000 UTC m=+691.477381635" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.466569 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-hn274"] Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 
05:36:41.467873 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-hn274" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.471436 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-2zxtm" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.491815 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh"] Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.492597 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.495147 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.518794 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh"] Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.521919 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-tdz8p"] Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.522579 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.543408 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-hn274"] Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.624915 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzq4v\" (UniqueName: \"kubernetes.io/projected/50749fb3-e43e-4874-a0ea-8dabae225f85-kube-api-access-fzq4v\") pod \"nmstate-webhook-866bcb46dc-mqtfh\" (UID: \"50749fb3-e43e-4874-a0ea-8dabae225f85\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.624975 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/50749fb3-e43e-4874-a0ea-8dabae225f85-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-mqtfh\" (UID: \"50749fb3-e43e-4874-a0ea-8dabae225f85\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.625002 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z84tw\" (UniqueName: \"kubernetes.io/projected/4b5e9e17-84bc-4d05-87f9-328826ea39df-kube-api-access-z84tw\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.625049 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4b5e9e17-84bc-4d05-87f9-328826ea39df-ovs-socket\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.625068 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4b5e9e17-84bc-4d05-87f9-328826ea39df-nmstate-lock\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.625101 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4b5e9e17-84bc-4d05-87f9-328826ea39df-dbus-socket\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.625201 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkhk2\" (UniqueName: \"kubernetes.io/projected/91d45b3f-23b3-4342-8168-667f665ffe82-kube-api-access-xkhk2\") pod \"nmstate-metrics-58c85c668d-hn274\" (UID: \"91d45b3f-23b3-4342-8168-667f665ffe82\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-hn274" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.645054 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62"] Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.645874 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.650387 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.650384 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.650624 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-r25p2" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.681067 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62"] Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726377 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/50749fb3-e43e-4874-a0ea-8dabae225f85-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-mqtfh\" (UID: \"50749fb3-e43e-4874-a0ea-8dabae225f85\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726417 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z84tw\" (UniqueName: \"kubernetes.io/projected/4b5e9e17-84bc-4d05-87f9-328826ea39df-kube-api-access-z84tw\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726450 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726483 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4b5e9e17-84bc-4d05-87f9-328826ea39df-ovs-socket\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726501 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4b5e9e17-84bc-4d05-87f9-328826ea39df-nmstate-lock\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726521 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqccj\" (UniqueName: \"kubernetes.io/projected/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-kube-api-access-tqccj\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726541 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4b5e9e17-84bc-4d05-87f9-328826ea39df-dbus-socket\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: E0219 05:36:41.726562 5012 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 19 05:36:41 crc kubenswrapper[5012]: E0219 05:36:41.726635 5012 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/50749fb3-e43e-4874-a0ea-8dabae225f85-tls-key-pair podName:50749fb3-e43e-4874-a0ea-8dabae225f85 nodeName:}" failed. No retries permitted until 2026-02-19 05:36:42.226618077 +0000 UTC m=+698.259940636 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/50749fb3-e43e-4874-a0ea-8dabae225f85-tls-key-pair") pod "nmstate-webhook-866bcb46dc-mqtfh" (UID: "50749fb3-e43e-4874-a0ea-8dabae225f85") : secret "openshift-nmstate-webhook" not found Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726631 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4b5e9e17-84bc-4d05-87f9-328826ea39df-nmstate-lock\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726717 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4b5e9e17-84bc-4d05-87f9-328826ea39df-ovs-socket\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726566 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkhk2\" (UniqueName: \"kubernetes.io/projected/91d45b3f-23b3-4342-8168-667f665ffe82-kube-api-access-xkhk2\") pod \"nmstate-metrics-58c85c668d-hn274\" (UID: \"91d45b3f-23b3-4342-8168-667f665ffe82\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-hn274" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726819 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4b5e9e17-84bc-4d05-87f9-328826ea39df-dbus-socket\") pod \"nmstate-handler-tdz8p\" (UID: 
\"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726945 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzq4v\" (UniqueName: \"kubernetes.io/projected/50749fb3-e43e-4874-a0ea-8dabae225f85-kube-api-access-fzq4v\") pod \"nmstate-webhook-866bcb46dc-mqtfh\" (UID: \"50749fb3-e43e-4874-a0ea-8dabae225f85\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.726993 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.754178 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z84tw\" (UniqueName: \"kubernetes.io/projected/4b5e9e17-84bc-4d05-87f9-328826ea39df-kube-api-access-z84tw\") pod \"nmstate-handler-tdz8p\" (UID: \"4b5e9e17-84bc-4d05-87f9-328826ea39df\") " pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.760078 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzq4v\" (UniqueName: \"kubernetes.io/projected/50749fb3-e43e-4874-a0ea-8dabae225f85-kube-api-access-fzq4v\") pod \"nmstate-webhook-866bcb46dc-mqtfh\" (UID: \"50749fb3-e43e-4874-a0ea-8dabae225f85\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.774949 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkhk2\" (UniqueName: 
\"kubernetes.io/projected/91d45b3f-23b3-4342-8168-667f665ffe82-kube-api-access-xkhk2\") pod \"nmstate-metrics-58c85c668d-hn274\" (UID: \"91d45b3f-23b3-4342-8168-667f665ffe82\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-hn274" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.786450 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-hn274" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.828575 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqccj\" (UniqueName: \"kubernetes.io/projected/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-kube-api-access-tqccj\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.828700 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.829391 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6c49886887-28b5c"] Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.829522 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:41 crc kubenswrapper[5012]: E0219 05:36:41.829675 5012 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret 
"plugin-serving-cert" not found Feb 19 05:36:41 crc kubenswrapper[5012]: E0219 05:36:41.829719 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-plugin-serving-cert podName:0aad4d6c-fc60-4843-b21b-d4ad6d552d5f nodeName:}" failed. No retries permitted until 2026-02-19 05:36:42.329707167 +0000 UTC m=+698.363029726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-zvl62" (UID: "0aad4d6c-fc60-4843-b21b-d4ad6d552d5f") : secret "plugin-serving-cert" not found Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.829609 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.830040 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.839665 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c49886887-28b5c"] Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.843345 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.853495 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqccj\" (UniqueName: \"kubernetes.io/projected/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-kube-api-access-tqccj\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:41 crc kubenswrapper[5012]: W0219 05:36:41.870409 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b5e9e17_84bc_4d05_87f9_328826ea39df.slice/crio-c01bd16b9f9be33458b34ba5f390b2ab5324a437aad2a02893e31162444e0749 WatchSource:0}: Error finding container c01bd16b9f9be33458b34ba5f390b2ab5324a437aad2a02893e31162444e0749: Status 404 returned error can't find the container with id c01bd16b9f9be33458b34ba5f390b2ab5324a437aad2a02893e31162444e0749 Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.930539 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-console-config\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.930753 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-console-oauth-config\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.930769 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-service-ca\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.930835 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-oauth-serving-cert\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.930852 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-trusted-ca-bundle\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.930987 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-console-serving-cert\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:41 crc kubenswrapper[5012]: I0219 05:36:41.931542 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jmhb\" (UniqueName: \"kubernetes.io/projected/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-kube-api-access-8jmhb\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc 
kubenswrapper[5012]: I0219 05:36:42.038929 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-oauth-serving-cert\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.038959 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-trusted-ca-bundle\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.038977 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-console-serving-cert\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.039017 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jmhb\" (UniqueName: \"kubernetes.io/projected/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-kube-api-access-8jmhb\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.039041 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-console-config\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc 
kubenswrapper[5012]: I0219 05:36:42.039058 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-service-ca\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.039072 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-console-oauth-config\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.040714 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-oauth-serving-cert\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.040836 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-trusted-ca-bundle\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.041419 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-service-ca\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.041536 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-console-config\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.042186 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-console-oauth-config\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.043015 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-console-serving-cert\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.054839 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jmhb\" (UniqueName: \"kubernetes.io/projected/7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3-kube-api-access-8jmhb\") pod \"console-6c49886887-28b5c\" (UID: \"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3\") " pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.170897 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.242377 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-hn274"] Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.242931 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/50749fb3-e43e-4874-a0ea-8dabae225f85-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-mqtfh\" (UID: \"50749fb3-e43e-4874-a0ea-8dabae225f85\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.249792 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/50749fb3-e43e-4874-a0ea-8dabae225f85-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-mqtfh\" (UID: \"50749fb3-e43e-4874-a0ea-8dabae225f85\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.344103 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.351734 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0aad4d6c-fc60-4843-b21b-d4ad6d552d5f-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-zvl62\" (UID: \"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.416639 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.462218 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c49886887-28b5c"] Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.467796 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tdz8p" event={"ID":"4b5e9e17-84bc-4d05-87f9-328826ea39df","Type":"ContainerStarted","Data":"c01bd16b9f9be33458b34ba5f390b2ab5324a437aad2a02893e31162444e0749"} Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.469427 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-hn274" event={"ID":"91d45b3f-23b3-4342-8168-667f665ffe82","Type":"ContainerStarted","Data":"9abf528d64ef1cd29e11d2a0fcd35d1c6a7d8e2f88a349bd620e93919a5c704e"} Feb 19 05:36:42 crc kubenswrapper[5012]: W0219 05:36:42.484513 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7609fb75_2a23_43bd_9cbd_6cc14fd4e7d3.slice/crio-cdd2660d1171d3cedd977d709fd7da7e312b89542e7fc09428a7f2b4b0de097d WatchSource:0}: Error finding container cdd2660d1171d3cedd977d709fd7da7e312b89542e7fc09428a7f2b4b0de097d: Status 404 returned error can't find the container with id cdd2660d1171d3cedd977d709fd7da7e312b89542e7fc09428a7f2b4b0de097d Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.564766 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.644822 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh"] Feb 19 05:36:42 crc kubenswrapper[5012]: W0219 05:36:42.671773 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50749fb3_e43e_4874_a0ea_8dabae225f85.slice/crio-c42040a163a7d736ad6c3effbdf415fc53051c1d4bb189c4408a774f08a94ef1 WatchSource:0}: Error finding container c42040a163a7d736ad6c3effbdf415fc53051c1d4bb189c4408a774f08a94ef1: Status 404 returned error can't find the container with id c42040a163a7d736ad6c3effbdf415fc53051c1d4bb189c4408a774f08a94ef1 Feb 19 05:36:42 crc kubenswrapper[5012]: I0219 05:36:42.776857 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62"] Feb 19 05:36:43 crc kubenswrapper[5012]: I0219 05:36:43.476325 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c49886887-28b5c" event={"ID":"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3","Type":"ContainerStarted","Data":"f54e52e0ca8ec5ffb6b3c2bd79452dabb4e05ecc5ad24f67ae5f2adec41dd2c5"} Feb 19 05:36:43 crc kubenswrapper[5012]: I0219 05:36:43.476550 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c49886887-28b5c" event={"ID":"7609fb75-2a23-43bd-9cbd-6cc14fd4e7d3","Type":"ContainerStarted","Data":"cdd2660d1171d3cedd977d709fd7da7e312b89542e7fc09428a7f2b4b0de097d"} Feb 19 05:36:43 crc kubenswrapper[5012]: I0219 05:36:43.479464 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" event={"ID":"50749fb3-e43e-4874-a0ea-8dabae225f85","Type":"ContainerStarted","Data":"c42040a163a7d736ad6c3effbdf415fc53051c1d4bb189c4408a774f08a94ef1"} Feb 19 05:36:43 crc kubenswrapper[5012]: I0219 
05:36:43.491293 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" event={"ID":"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f","Type":"ContainerStarted","Data":"89b7912ba7d901985484936a1e4bee156a2e0839180714b2e500df56277ee32d"} Feb 19 05:36:43 crc kubenswrapper[5012]: I0219 05:36:43.501527 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6c49886887-28b5c" podStartSLOduration=2.5015138930000003 podStartE2EDuration="2.501513893s" podCreationTimestamp="2026-02-19 05:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:36:43.498141361 +0000 UTC m=+699.531463930" watchObservedRunningTime="2026-02-19 05:36:43.501513893 +0000 UTC m=+699.534836462" Feb 19 05:36:44 crc kubenswrapper[5012]: I0219 05:36:44.430079 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:36:44 crc kubenswrapper[5012]: I0219 05:36:44.430408 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:36:45 crc kubenswrapper[5012]: I0219 05:36:45.508988 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" event={"ID":"50749fb3-e43e-4874-a0ea-8dabae225f85","Type":"ContainerStarted","Data":"4de46efda103b843bc38d058ff6e262f4b74d298b4bf38dbe0e208847b977f1a"} Feb 19 05:36:45 crc kubenswrapper[5012]: I0219 
05:36:45.509559 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:36:45 crc kubenswrapper[5012]: I0219 05:36:45.511131 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-tdz8p" event={"ID":"4b5e9e17-84bc-4d05-87f9-328826ea39df","Type":"ContainerStarted","Data":"c16ced7211ef93693e6f299e4ce037de04bf60cf31df271b639ed4210561fc8b"} Feb 19 05:36:45 crc kubenswrapper[5012]: I0219 05:36:45.512044 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:45 crc kubenswrapper[5012]: I0219 05:36:45.513599 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-hn274" event={"ID":"91d45b3f-23b3-4342-8168-667f665ffe82","Type":"ContainerStarted","Data":"5f4756ba306c4cb4ce549a37b37941c92f75f51695101de27b400b9213002744"} Feb 19 05:36:45 crc kubenswrapper[5012]: I0219 05:36:45.556612 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" podStartSLOduration=2.641174171 podStartE2EDuration="4.556593038s" podCreationTimestamp="2026-02-19 05:36:41 +0000 UTC" firstStartedPulling="2026-02-19 05:36:42.684619106 +0000 UTC m=+698.717941675" lastFinishedPulling="2026-02-19 05:36:44.600037933 +0000 UTC m=+700.633360542" observedRunningTime="2026-02-19 05:36:45.531635241 +0000 UTC m=+701.564957810" watchObservedRunningTime="2026-02-19 05:36:45.556593038 +0000 UTC m=+701.589915607" Feb 19 05:36:46 crc kubenswrapper[5012]: I0219 05:36:46.521810 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" event={"ID":"0aad4d6c-fc60-4843-b21b-d4ad6d552d5f","Type":"ContainerStarted","Data":"6db0c9354d191485f9fff11c29fd1e9e6ec2db9b0e8c7415c2932b0540d693c1"} Feb 19 05:36:46 crc kubenswrapper[5012]: I0219 05:36:46.540356 5012 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-tdz8p" podStartSLOduration=2.825210211 podStartE2EDuration="5.540337674s" podCreationTimestamp="2026-02-19 05:36:41 +0000 UTC" firstStartedPulling="2026-02-19 05:36:41.873583305 +0000 UTC m=+697.906905874" lastFinishedPulling="2026-02-19 05:36:44.588710728 +0000 UTC m=+700.622033337" observedRunningTime="2026-02-19 05:36:45.55747233 +0000 UTC m=+701.590794969" watchObservedRunningTime="2026-02-19 05:36:46.540337674 +0000 UTC m=+702.573660243" Feb 19 05:36:48 crc kubenswrapper[5012]: I0219 05:36:48.544959 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-hn274" event={"ID":"91d45b3f-23b3-4342-8168-667f665ffe82","Type":"ContainerStarted","Data":"f40e3f9611b39d0ed2dd3c5b67666a3f2b2b605b17b23f577f1f84ca3c59c1ff"} Feb 19 05:36:48 crc kubenswrapper[5012]: I0219 05:36:48.572983 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-zvl62" podStartSLOduration=4.323630056 podStartE2EDuration="7.572954043s" podCreationTimestamp="2026-02-19 05:36:41 +0000 UTC" firstStartedPulling="2026-02-19 05:36:42.806006391 +0000 UTC m=+698.839328970" lastFinishedPulling="2026-02-19 05:36:46.055330358 +0000 UTC m=+702.088652957" observedRunningTime="2026-02-19 05:36:46.544480765 +0000 UTC m=+702.577803334" watchObservedRunningTime="2026-02-19 05:36:48.572954043 +0000 UTC m=+704.606276642" Feb 19 05:36:48 crc kubenswrapper[5012]: I0219 05:36:48.574496 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-hn274" podStartSLOduration=2.4698195 podStartE2EDuration="7.57448375s" podCreationTimestamp="2026-02-19 05:36:41 +0000 UTC" firstStartedPulling="2026-02-19 05:36:42.268390154 +0000 UTC m=+698.301712713" lastFinishedPulling="2026-02-19 05:36:47.373054394 +0000 UTC m=+703.406376963" 
observedRunningTime="2026-02-19 05:36:48.5724386 +0000 UTC m=+704.605761199" watchObservedRunningTime="2026-02-19 05:36:48.57448375 +0000 UTC m=+704.607806359" Feb 19 05:36:51 crc kubenswrapper[5012]: I0219 05:36:51.883610 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-tdz8p" Feb 19 05:36:52 crc kubenswrapper[5012]: I0219 05:36:52.171945 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:52 crc kubenswrapper[5012]: I0219 05:36:52.172499 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:52 crc kubenswrapper[5012]: I0219 05:36:52.180038 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:52 crc kubenswrapper[5012]: I0219 05:36:52.593907 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6c49886887-28b5c" Feb 19 05:36:52 crc kubenswrapper[5012]: I0219 05:36:52.674069 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-mlxbg"] Feb 19 05:37:02 crc kubenswrapper[5012]: I0219 05:37:02.424549 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mqtfh" Feb 19 05:37:14 crc kubenswrapper[5012]: I0219 05:37:14.437354 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:37:14 crc kubenswrapper[5012]: I0219 05:37:14.438048 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:37:17 crc kubenswrapper[5012]: I0219 05:37:17.750172 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-mlxbg" podUID="5ff8f20f-5302-4b7a-826c-5d557c65c0f3" containerName="console" containerID="cri-o://cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737" gracePeriod=15 Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.192401 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-mlxbg_5ff8f20f-5302-4b7a-826c-5d557c65c0f3/console/0.log" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.192463 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.368668 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-trusted-ca-bundle\") pod \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.368976 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-service-ca\") pod \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.369075 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-oauth-serving-cert\") pod \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\" 
(UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.369171 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-config\") pod \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.369294 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-serving-cert\") pod \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.369417 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzxsb\" (UniqueName: \"kubernetes.io/projected/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-kube-api-access-dzxsb\") pod \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.369520 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-oauth-config\") pod \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\" (UID: \"5ff8f20f-5302-4b7a-826c-5d557c65c0f3\") " Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.370071 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-service-ca" (OuterVolumeSpecName: "service-ca") pod "5ff8f20f-5302-4b7a-826c-5d557c65c0f3" (UID: "5ff8f20f-5302-4b7a-826c-5d557c65c0f3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.370381 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5ff8f20f-5302-4b7a-826c-5d557c65c0f3" (UID: "5ff8f20f-5302-4b7a-826c-5d557c65c0f3"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.370602 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-config" (OuterVolumeSpecName: "console-config") pod "5ff8f20f-5302-4b7a-826c-5d557c65c0f3" (UID: "5ff8f20f-5302-4b7a-826c-5d557c65c0f3"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.370952 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5ff8f20f-5302-4b7a-826c-5d557c65c0f3" (UID: "5ff8f20f-5302-4b7a-826c-5d557c65c0f3"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.376084 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5ff8f20f-5302-4b7a-826c-5d557c65c0f3" (UID: "5ff8f20f-5302-4b7a-826c-5d557c65c0f3"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.379107 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-kube-api-access-dzxsb" (OuterVolumeSpecName: "kube-api-access-dzxsb") pod "5ff8f20f-5302-4b7a-826c-5d557c65c0f3" (UID: "5ff8f20f-5302-4b7a-826c-5d557c65c0f3"). InnerVolumeSpecName "kube-api-access-dzxsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.380479 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5ff8f20f-5302-4b7a-826c-5d557c65c0f3" (UID: "5ff8f20f-5302-4b7a-826c-5d557c65c0f3"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.470698 5012 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.470747 5012 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.470756 5012 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.470764 5012 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.470772 5012 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.470782 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzxsb\" (UniqueName: \"kubernetes.io/projected/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-kube-api-access-dzxsb\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.470794 5012 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ff8f20f-5302-4b7a-826c-5d557c65c0f3-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.794732 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-mlxbg_5ff8f20f-5302-4b7a-826c-5d557c65c0f3/console/0.log" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.794792 5012 generic.go:334] "Generic (PLEG): container finished" podID="5ff8f20f-5302-4b7a-826c-5d557c65c0f3" containerID="cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737" exitCode=2 Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.794824 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mlxbg" event={"ID":"5ff8f20f-5302-4b7a-826c-5d557c65c0f3","Type":"ContainerDied","Data":"cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737"} Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.794852 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mlxbg" 
event={"ID":"5ff8f20f-5302-4b7a-826c-5d557c65c0f3","Type":"ContainerDied","Data":"a6f2569260b6928a746b0541013161dd385ea0ab1aad5d9524e6efae3299b362"} Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.794869 5012 scope.go:117] "RemoveContainer" containerID="cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.794918 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mlxbg" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.814647 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-mlxbg"] Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.818879 5012 scope.go:117] "RemoveContainer" containerID="cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.819222 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-mlxbg"] Feb 19 05:37:18 crc kubenswrapper[5012]: E0219 05:37:18.819369 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737\": container with ID starting with cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737 not found: ID does not exist" containerID="cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737" Feb 19 05:37:18 crc kubenswrapper[5012]: I0219 05:37:18.819416 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737"} err="failed to get container status \"cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737\": rpc error: code = NotFound desc = could not find container \"cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737\": container with ID starting with 
cf01683208ca15e148a1707265122b86aa4d84685c0cfc0bf3aefd130e5e8737 not found: ID does not exist" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.277654 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf"] Feb 19 05:37:19 crc kubenswrapper[5012]: E0219 05:37:19.278016 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ff8f20f-5302-4b7a-826c-5d557c65c0f3" containerName="console" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.278041 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ff8f20f-5302-4b7a-826c-5d557c65c0f3" containerName="console" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.278228 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ff8f20f-5302-4b7a-826c-5d557c65c0f3" containerName="console" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.279610 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.283123 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.290793 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf"] Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.384543 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc 
kubenswrapper[5012]: I0219 05:37:19.384606 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.384873 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwn4s\" (UniqueName: \"kubernetes.io/projected/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-kube-api-access-fwn4s\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.486690 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.486765 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.486867 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-fwn4s\" (UniqueName: \"kubernetes.io/projected/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-kube-api-access-fwn4s\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.487263 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.487811 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.525369 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwn4s\" (UniqueName: \"kubernetes.io/projected/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-kube-api-access-fwn4s\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.614616 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:19 crc kubenswrapper[5012]: I0219 05:37:19.879898 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf"] Feb 19 05:37:20 crc kubenswrapper[5012]: I0219 05:37:20.715264 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ff8f20f-5302-4b7a-826c-5d557c65c0f3" path="/var/lib/kubelet/pods/5ff8f20f-5302-4b7a-826c-5d557c65c0f3/volumes" Feb 19 05:37:20 crc kubenswrapper[5012]: I0219 05:37:20.818183 5012 generic.go:334] "Generic (PLEG): container finished" podID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerID="aadfaec376e2f374a0e410d704abfdd0041449e55a2f0296a55c1ca8809f871a" exitCode=0 Feb 19 05:37:20 crc kubenswrapper[5012]: I0219 05:37:20.818369 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" event={"ID":"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778","Type":"ContainerDied","Data":"aadfaec376e2f374a0e410d704abfdd0041449e55a2f0296a55c1ca8809f871a"} Feb 19 05:37:20 crc kubenswrapper[5012]: I0219 05:37:20.818867 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" event={"ID":"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778","Type":"ContainerStarted","Data":"532effff0272a6ed4a1427a3800addb685987ce9a23630f3d1b4f93cbcb8aa92"} Feb 19 05:37:22 crc kubenswrapper[5012]: I0219 05:37:22.860795 5012 generic.go:334] "Generic (PLEG): container finished" podID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerID="f862f54b2a8ea948ed3610af72b7b7d0c81d24236c009cd6cacf5e25e5e6fa5e" exitCode=0 Feb 19 05:37:22 crc kubenswrapper[5012]: I0219 05:37:22.860923 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" event={"ID":"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778","Type":"ContainerDied","Data":"f862f54b2a8ea948ed3610af72b7b7d0c81d24236c009cd6cacf5e25e5e6fa5e"} Feb 19 05:37:23 crc kubenswrapper[5012]: I0219 05:37:23.906673 5012 generic.go:334] "Generic (PLEG): container finished" podID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerID="cad0773f4b16d786fc1e16f199c86852cd659377043048ad5d5ae36732edd2af" exitCode=0 Feb 19 05:37:23 crc kubenswrapper[5012]: I0219 05:37:23.906734 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" event={"ID":"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778","Type":"ContainerDied","Data":"cad0773f4b16d786fc1e16f199c86852cd659377043048ad5d5ae36732edd2af"} Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.301166 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.472698 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwn4s\" (UniqueName: \"kubernetes.io/projected/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-kube-api-access-fwn4s\") pod \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.472849 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-bundle\") pod \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.472925 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-util\") pod \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\" (UID: \"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778\") " Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.474240 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-bundle" (OuterVolumeSpecName: "bundle") pod "ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" (UID: "ee5d7005-f5b3-4a68-8ae6-e74db1bd0778"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.485154 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-kube-api-access-fwn4s" (OuterVolumeSpecName: "kube-api-access-fwn4s") pod "ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" (UID: "ee5d7005-f5b3-4a68-8ae6-e74db1bd0778"). InnerVolumeSpecName "kube-api-access-fwn4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.572644 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-util" (OuterVolumeSpecName: "util") pod "ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" (UID: "ee5d7005-f5b3-4a68-8ae6-e74db1bd0778"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.575445 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwn4s\" (UniqueName: \"kubernetes.io/projected/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-kube-api-access-fwn4s\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.575505 5012 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.576191 5012 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee5d7005-f5b3-4a68-8ae6-e74db1bd0778-util\") on node \"crc\" DevicePath \"\"" Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.931743 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" event={"ID":"ee5d7005-f5b3-4a68-8ae6-e74db1bd0778","Type":"ContainerDied","Data":"532effff0272a6ed4a1427a3800addb685987ce9a23630f3d1b4f93cbcb8aa92"} Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.931803 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="532effff0272a6ed4a1427a3800addb685987ce9a23630f3d1b4f93cbcb8aa92" Feb 19 05:37:25 crc kubenswrapper[5012]: I0219 05:37:25.931845 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.802905 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj"] Feb 19 05:37:36 crc kubenswrapper[5012]: E0219 05:37:36.803759 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerName="pull" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.803773 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerName="pull" Feb 19 05:37:36 crc kubenswrapper[5012]: E0219 05:37:36.803793 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerName="util" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.803801 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerName="util" Feb 19 05:37:36 crc kubenswrapper[5012]: E0219 05:37:36.803816 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerName="extract" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.803824 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerName="extract" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.803954 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee5d7005-f5b3-4a68-8ae6-e74db1bd0778" containerName="extract" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.804463 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.807757 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.807969 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.808151 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-gzbbd" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.809534 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.811727 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.819703 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj"] Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.931085 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05b78fff-bf4d-4cd6-aba9-b74303a5dd50-apiservice-cert\") pod \"metallb-operator-controller-manager-558c5c4774-9r4gj\" (UID: \"05b78fff-bf4d-4cd6-aba9-b74303a5dd50\") " pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.931262 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hssqx\" (UniqueName: \"kubernetes.io/projected/05b78fff-bf4d-4cd6-aba9-b74303a5dd50-kube-api-access-hssqx\") pod 
\"metallb-operator-controller-manager-558c5c4774-9r4gj\" (UID: \"05b78fff-bf4d-4cd6-aba9-b74303a5dd50\") " pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:36 crc kubenswrapper[5012]: I0219 05:37:36.931374 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05b78fff-bf4d-4cd6-aba9-b74303a5dd50-webhook-cert\") pod \"metallb-operator-controller-manager-558c5c4774-9r4gj\" (UID: \"05b78fff-bf4d-4cd6-aba9-b74303a5dd50\") " pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.032107 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05b78fff-bf4d-4cd6-aba9-b74303a5dd50-webhook-cert\") pod \"metallb-operator-controller-manager-558c5c4774-9r4gj\" (UID: \"05b78fff-bf4d-4cd6-aba9-b74303a5dd50\") " pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.032171 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05b78fff-bf4d-4cd6-aba9-b74303a5dd50-apiservice-cert\") pod \"metallb-operator-controller-manager-558c5c4774-9r4gj\" (UID: \"05b78fff-bf4d-4cd6-aba9-b74303a5dd50\") " pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.032211 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hssqx\" (UniqueName: \"kubernetes.io/projected/05b78fff-bf4d-4cd6-aba9-b74303a5dd50-kube-api-access-hssqx\") pod \"metallb-operator-controller-manager-558c5c4774-9r4gj\" (UID: \"05b78fff-bf4d-4cd6-aba9-b74303a5dd50\") " pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:37 crc 
kubenswrapper[5012]: I0219 05:37:37.037989 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05b78fff-bf4d-4cd6-aba9-b74303a5dd50-webhook-cert\") pod \"metallb-operator-controller-manager-558c5c4774-9r4gj\" (UID: \"05b78fff-bf4d-4cd6-aba9-b74303a5dd50\") " pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.038716 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05b78fff-bf4d-4cd6-aba9-b74303a5dd50-apiservice-cert\") pod \"metallb-operator-controller-manager-558c5c4774-9r4gj\" (UID: \"05b78fff-bf4d-4cd6-aba9-b74303a5dd50\") " pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.060121 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hssqx\" (UniqueName: \"kubernetes.io/projected/05b78fff-bf4d-4cd6-aba9-b74303a5dd50-kube-api-access-hssqx\") pod \"metallb-operator-controller-manager-558c5c4774-9r4gj\" (UID: \"05b78fff-bf4d-4cd6-aba9-b74303a5dd50\") " pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.118431 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.142895 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74"] Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.143561 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.145928 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.145947 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.146641 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-wwzlh" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.158992 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74"] Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.234790 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79hx9\" (UniqueName: \"kubernetes.io/projected/ec7fdada-6f6e-4d8b-b2e1-c944050c714c-kube-api-access-79hx9\") pod \"metallb-operator-webhook-server-699bc447bd-zqv74\" (UID: \"ec7fdada-6f6e-4d8b-b2e1-c944050c714c\") " pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.235093 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec7fdada-6f6e-4d8b-b2e1-c944050c714c-webhook-cert\") pod \"metallb-operator-webhook-server-699bc447bd-zqv74\" (UID: \"ec7fdada-6f6e-4d8b-b2e1-c944050c714c\") " pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.235126 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/ec7fdada-6f6e-4d8b-b2e1-c944050c714c-apiservice-cert\") pod \"metallb-operator-webhook-server-699bc447bd-zqv74\" (UID: \"ec7fdada-6f6e-4d8b-b2e1-c944050c714c\") " pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.335884 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ec7fdada-6f6e-4d8b-b2e1-c944050c714c-apiservice-cert\") pod \"metallb-operator-webhook-server-699bc447bd-zqv74\" (UID: \"ec7fdada-6f6e-4d8b-b2e1-c944050c714c\") " pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.335956 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79hx9\" (UniqueName: \"kubernetes.io/projected/ec7fdada-6f6e-4d8b-b2e1-c944050c714c-kube-api-access-79hx9\") pod \"metallb-operator-webhook-server-699bc447bd-zqv74\" (UID: \"ec7fdada-6f6e-4d8b-b2e1-c944050c714c\") " pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.335999 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec7fdada-6f6e-4d8b-b2e1-c944050c714c-webhook-cert\") pod \"metallb-operator-webhook-server-699bc447bd-zqv74\" (UID: \"ec7fdada-6f6e-4d8b-b2e1-c944050c714c\") " pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.339392 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ec7fdada-6f6e-4d8b-b2e1-c944050c714c-apiservice-cert\") pod \"metallb-operator-webhook-server-699bc447bd-zqv74\" (UID: \"ec7fdada-6f6e-4d8b-b2e1-c944050c714c\") " pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 
05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.339659 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec7fdada-6f6e-4d8b-b2e1-c944050c714c-webhook-cert\") pod \"metallb-operator-webhook-server-699bc447bd-zqv74\" (UID: \"ec7fdada-6f6e-4d8b-b2e1-c944050c714c\") " pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.352057 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79hx9\" (UniqueName: \"kubernetes.io/projected/ec7fdada-6f6e-4d8b-b2e1-c944050c714c-kube-api-access-79hx9\") pod \"metallb-operator-webhook-server-699bc447bd-zqv74\" (UID: \"ec7fdada-6f6e-4d8b-b2e1-c944050c714c\") " pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.486353 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.660829 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj"] Feb 19 05:37:37 crc kubenswrapper[5012]: W0219 05:37:37.669476 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05b78fff_bf4d_4cd6_aba9_b74303a5dd50.slice/crio-db33b2497cfe8e6b4ca0d70590f1535173dd90537b73e610e60b2787be9e73cc WatchSource:0}: Error finding container db33b2497cfe8e6b4ca0d70590f1535173dd90537b73e610e60b2787be9e73cc: Status 404 returned error can't find the container with id db33b2497cfe8e6b4ca0d70590f1535173dd90537b73e610e60b2787be9e73cc Feb 19 05:37:37 crc kubenswrapper[5012]: I0219 05:37:37.761860 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74"] Feb 19 05:37:37 
crc kubenswrapper[5012]: W0219 05:37:37.772038 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec7fdada_6f6e_4d8b_b2e1_c944050c714c.slice/crio-d780fe048cc2aa7a52e416bfdb029ca75146724ef3048189b1a29993c260336d WatchSource:0}: Error finding container d780fe048cc2aa7a52e416bfdb029ca75146724ef3048189b1a29993c260336d: Status 404 returned error can't find the container with id d780fe048cc2aa7a52e416bfdb029ca75146724ef3048189b1a29993c260336d Feb 19 05:37:38 crc kubenswrapper[5012]: I0219 05:37:38.007449 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" event={"ID":"ec7fdada-6f6e-4d8b-b2e1-c944050c714c","Type":"ContainerStarted","Data":"d780fe048cc2aa7a52e416bfdb029ca75146724ef3048189b1a29993c260336d"} Feb 19 05:37:38 crc kubenswrapper[5012]: I0219 05:37:38.008563 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" event={"ID":"05b78fff-bf4d-4cd6-aba9-b74303a5dd50","Type":"ContainerStarted","Data":"db33b2497cfe8e6b4ca0d70590f1535173dd90537b73e610e60b2787be9e73cc"} Feb 19 05:37:40 crc kubenswrapper[5012]: I0219 05:37:40.338940 5012 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 05:37:41 crc kubenswrapper[5012]: I0219 05:37:41.077540 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" event={"ID":"05b78fff-bf4d-4cd6-aba9-b74303a5dd50","Type":"ContainerStarted","Data":"b44e1fe3a02a35d922aad5e0d8f95c3d5ff220e4b2b34a031561c4658bb70611"} Feb 19 05:37:41 crc kubenswrapper[5012]: I0219 05:37:41.077834 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:37:41 crc kubenswrapper[5012]: I0219 
05:37:41.102856 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" podStartSLOduration=1.968173197 podStartE2EDuration="5.102837532s" podCreationTimestamp="2026-02-19 05:37:36 +0000 UTC" firstStartedPulling="2026-02-19 05:37:37.671247149 +0000 UTC m=+753.704569728" lastFinishedPulling="2026-02-19 05:37:40.805911494 +0000 UTC m=+756.839234063" observedRunningTime="2026-02-19 05:37:41.09539801 +0000 UTC m=+757.128720629" watchObservedRunningTime="2026-02-19 05:37:41.102837532 +0000 UTC m=+757.136160101" Feb 19 05:37:43 crc kubenswrapper[5012]: I0219 05:37:43.093610 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" event={"ID":"ec7fdada-6f6e-4d8b-b2e1-c944050c714c","Type":"ContainerStarted","Data":"496af1f7a9d08f212da3074df25922935e599f28b8fe04441d505db64054be82"} Feb 19 05:37:43 crc kubenswrapper[5012]: I0219 05:37:43.094018 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:37:43 crc kubenswrapper[5012]: I0219 05:37:43.123986 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" podStartSLOduration=1.354981662 podStartE2EDuration="6.12396552s" podCreationTimestamp="2026-02-19 05:37:37 +0000 UTC" firstStartedPulling="2026-02-19 05:37:37.77521246 +0000 UTC m=+753.808535029" lastFinishedPulling="2026-02-19 05:37:42.544196318 +0000 UTC m=+758.577518887" observedRunningTime="2026-02-19 05:37:43.117209695 +0000 UTC m=+759.150532284" watchObservedRunningTime="2026-02-19 05:37:43.12396552 +0000 UTC m=+759.157288089" Feb 19 05:37:44 crc kubenswrapper[5012]: I0219 05:37:44.431216 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:37:44 crc kubenswrapper[5012]: I0219 05:37:44.431746 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:37:44 crc kubenswrapper[5012]: I0219 05:37:44.431814 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:37:44 crc kubenswrapper[5012]: I0219 05:37:44.433063 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2fa30f17f6fec33303fdb3b3cb4c275384acd11d008a1c182ee7a051d5288089"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 05:37:44 crc kubenswrapper[5012]: I0219 05:37:44.433214 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://2fa30f17f6fec33303fdb3b3cb4c275384acd11d008a1c182ee7a051d5288089" gracePeriod=600 Feb 19 05:37:45 crc kubenswrapper[5012]: I0219 05:37:45.109976 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="2fa30f17f6fec33303fdb3b3cb4c275384acd11d008a1c182ee7a051d5288089" exitCode=0 Feb 19 05:37:45 crc kubenswrapper[5012]: I0219 05:37:45.110439 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"2fa30f17f6fec33303fdb3b3cb4c275384acd11d008a1c182ee7a051d5288089"} Feb 19 05:37:45 crc kubenswrapper[5012]: I0219 05:37:45.110507 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"f6b4f2485162f8c24d6693d845318234656e6a8c97d49d2e72f4427654fa319a"} Feb 19 05:37:45 crc kubenswrapper[5012]: I0219 05:37:45.110530 5012 scope.go:117] "RemoveContainer" containerID="8431b8eb7363f7603ff116fd5d3f9ab3ed3f378fbd36db4efaaa1521cb246ddd" Feb 19 05:37:50 crc kubenswrapper[5012]: I0219 05:37:50.926111 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vd2gr"] Feb 19 05:37:50 crc kubenswrapper[5012]: I0219 05:37:50.927528 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:50 crc kubenswrapper[5012]: I0219 05:37:50.945551 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vd2gr"] Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.080724 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5bxp\" (UniqueName: \"kubernetes.io/projected/36832a35-ae82-46eb-89dd-9e1a1a58fca1-kube-api-access-x5bxp\") pod \"redhat-operators-vd2gr\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.081144 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-utilities\") pod \"redhat-operators-vd2gr\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.081286 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-catalog-content\") pod \"redhat-operators-vd2gr\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.182813 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5bxp\" (UniqueName: \"kubernetes.io/projected/36832a35-ae82-46eb-89dd-9e1a1a58fca1-kube-api-access-x5bxp\") pod \"redhat-operators-vd2gr\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.182916 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-utilities\") pod \"redhat-operators-vd2gr\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.182954 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-catalog-content\") pod \"redhat-operators-vd2gr\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.183768 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-catalog-content\") pod \"redhat-operators-vd2gr\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.183784 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-utilities\") pod \"redhat-operators-vd2gr\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.212562 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5bxp\" (UniqueName: \"kubernetes.io/projected/36832a35-ae82-46eb-89dd-9e1a1a58fca1-kube-api-access-x5bxp\") pod \"redhat-operators-vd2gr\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.289590 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:37:51 crc kubenswrapper[5012]: I0219 05:37:51.732381 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vd2gr"] Feb 19 05:37:52 crc kubenswrapper[5012]: I0219 05:37:52.181361 5012 generic.go:334] "Generic (PLEG): container finished" podID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerID="e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395" exitCode=0 Feb 19 05:37:52 crc kubenswrapper[5012]: I0219 05:37:52.181614 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd2gr" event={"ID":"36832a35-ae82-46eb-89dd-9e1a1a58fca1","Type":"ContainerDied","Data":"e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395"} Feb 19 05:37:52 crc kubenswrapper[5012]: I0219 05:37:52.181639 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd2gr" event={"ID":"36832a35-ae82-46eb-89dd-9e1a1a58fca1","Type":"ContainerStarted","Data":"12d34e10928c2dfbd4f6f549b88f76696b6d8930975e76bce01f89d79536c334"} Feb 19 05:37:54 crc kubenswrapper[5012]: I0219 05:37:54.197727 5012 generic.go:334] "Generic (PLEG): container finished" podID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerID="1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9" exitCode=0 Feb 19 05:37:54 crc kubenswrapper[5012]: I0219 05:37:54.197780 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd2gr" event={"ID":"36832a35-ae82-46eb-89dd-9e1a1a58fca1","Type":"ContainerDied","Data":"1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9"} Feb 19 05:37:55 crc kubenswrapper[5012]: I0219 05:37:55.208616 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd2gr" 
event={"ID":"36832a35-ae82-46eb-89dd-9e1a1a58fca1","Type":"ContainerStarted","Data":"05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c"} Feb 19 05:37:55 crc kubenswrapper[5012]: I0219 05:37:55.229491 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vd2gr" podStartSLOduration=2.550997324 podStartE2EDuration="5.229451053s" podCreationTimestamp="2026-02-19 05:37:50 +0000 UTC" firstStartedPulling="2026-02-19 05:37:52.183032117 +0000 UTC m=+768.216354686" lastFinishedPulling="2026-02-19 05:37:54.861485836 +0000 UTC m=+770.894808415" observedRunningTime="2026-02-19 05:37:55.22646012 +0000 UTC m=+771.259782729" watchObservedRunningTime="2026-02-19 05:37:55.229451053 +0000 UTC m=+771.262773632" Feb 19 05:37:57 crc kubenswrapper[5012]: I0219 05:37:57.491722 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-699bc447bd-zqv74" Feb 19 05:38:01 crc kubenswrapper[5012]: I0219 05:38:01.290125 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:38:01 crc kubenswrapper[5012]: I0219 05:38:01.290481 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:38:02 crc kubenswrapper[5012]: I0219 05:38:02.348892 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vd2gr" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerName="registry-server" probeResult="failure" output=< Feb 19 05:38:02 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 05:38:02 crc kubenswrapper[5012]: > Feb 19 05:38:11 crc kubenswrapper[5012]: I0219 05:38:11.362684 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:38:11 crc 
kubenswrapper[5012]: I0219 05:38:11.427726 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:38:11 crc kubenswrapper[5012]: I0219 05:38:11.611276 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vd2gr"] Feb 19 05:38:13 crc kubenswrapper[5012]: I0219 05:38:13.356411 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vd2gr" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerName="registry-server" containerID="cri-o://05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c" gracePeriod=2 Feb 19 05:38:13 crc kubenswrapper[5012]: I0219 05:38:13.880433 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.034517 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5bxp\" (UniqueName: \"kubernetes.io/projected/36832a35-ae82-46eb-89dd-9e1a1a58fca1-kube-api-access-x5bxp\") pod \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.034578 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-catalog-content\") pod \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.034633 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-utilities\") pod \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\" (UID: \"36832a35-ae82-46eb-89dd-9e1a1a58fca1\") " Feb 19 05:38:14 crc 
kubenswrapper[5012]: I0219 05:38:14.035771 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-utilities" (OuterVolumeSpecName: "utilities") pod "36832a35-ae82-46eb-89dd-9e1a1a58fca1" (UID: "36832a35-ae82-46eb-89dd-9e1a1a58fca1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.043508 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36832a35-ae82-46eb-89dd-9e1a1a58fca1-kube-api-access-x5bxp" (OuterVolumeSpecName: "kube-api-access-x5bxp") pod "36832a35-ae82-46eb-89dd-9e1a1a58fca1" (UID: "36832a35-ae82-46eb-89dd-9e1a1a58fca1"). InnerVolumeSpecName "kube-api-access-x5bxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.135988 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5bxp\" (UniqueName: \"kubernetes.io/projected/36832a35-ae82-46eb-89dd-9e1a1a58fca1-kube-api-access-x5bxp\") on node \"crc\" DevicePath \"\"" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.136020 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.175556 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36832a35-ae82-46eb-89dd-9e1a1a58fca1" (UID: "36832a35-ae82-46eb-89dd-9e1a1a58fca1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.237345 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36832a35-ae82-46eb-89dd-9e1a1a58fca1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.368709 5012 generic.go:334] "Generic (PLEG): container finished" podID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerID="05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c" exitCode=0 Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.368776 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd2gr" event={"ID":"36832a35-ae82-46eb-89dd-9e1a1a58fca1","Type":"ContainerDied","Data":"05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c"} Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.368838 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vd2gr" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.368866 5012 scope.go:117] "RemoveContainer" containerID="05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.368846 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd2gr" event={"ID":"36832a35-ae82-46eb-89dd-9e1a1a58fca1","Type":"ContainerDied","Data":"12d34e10928c2dfbd4f6f549b88f76696b6d8930975e76bce01f89d79536c334"} Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.397661 5012 scope.go:117] "RemoveContainer" containerID="1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.422367 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vd2gr"] Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.430671 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vd2gr"] Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.441118 5012 scope.go:117] "RemoveContainer" containerID="e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.471375 5012 scope.go:117] "RemoveContainer" containerID="05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c" Feb 19 05:38:14 crc kubenswrapper[5012]: E0219 05:38:14.471948 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c\": container with ID starting with 05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c not found: ID does not exist" containerID="05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.472001 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c"} err="failed to get container status \"05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c\": rpc error: code = NotFound desc = could not find container \"05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c\": container with ID starting with 05f6ca2a9ec51ebc604c97f05eacd29baa7e16435c8034776f72ada2dc83857c not found: ID does not exist" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.472049 5012 scope.go:117] "RemoveContainer" containerID="1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9" Feb 19 05:38:14 crc kubenswrapper[5012]: E0219 05:38:14.473158 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9\": container with ID starting with 1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9 not found: ID does not exist" containerID="1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.473257 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9"} err="failed to get container status \"1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9\": rpc error: code = NotFound desc = could not find container \"1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9\": container with ID starting with 1b5e1b4dac5d306b8aecdbbcfb00164d1ef54e4025095102c577483c86c090a9 not found: ID does not exist" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.473339 5012 scope.go:117] "RemoveContainer" containerID="e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395" Feb 19 05:38:14 crc kubenswrapper[5012]: E0219 
05:38:14.473978 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395\": container with ID starting with e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395 not found: ID does not exist" containerID="e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.474020 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395"} err="failed to get container status \"e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395\": rpc error: code = NotFound desc = could not find container \"e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395\": container with ID starting with e0c5ff5bb161a9c4cd203ca86dbf6a2b2648eebe721d486e8ab8270513202395 not found: ID does not exist" Feb 19 05:38:14 crc kubenswrapper[5012]: I0219 05:38:14.716595 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" path="/var/lib/kubelet/pods/36832a35-ae82-46eb-89dd-9e1a1a58fca1/volumes" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.121946 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-558c5c4774-9r4gj" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.890886 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4d76m"] Feb 19 05:38:17 crc kubenswrapper[5012]: E0219 05:38:17.891428 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerName="extract-content" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.891460 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" 
containerName="extract-content" Feb 19 05:38:17 crc kubenswrapper[5012]: E0219 05:38:17.891480 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerName="registry-server" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.891494 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerName="registry-server" Feb 19 05:38:17 crc kubenswrapper[5012]: E0219 05:38:17.891521 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerName="extract-utilities" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.891535 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerName="extract-utilities" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.891766 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="36832a35-ae82-46eb-89dd-9e1a1a58fca1" containerName="registry-server" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.900018 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84"] Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.900739 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.901039 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.905036 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.905244 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.905543 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-clrpz" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.905757 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.924820 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84"] Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.982289 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-87ct4"] Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.983747 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-87ct4" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.987234 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-pdjcn" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.987548 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.987748 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.988866 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.990528 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-c4jbq"] Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.991513 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:17 crc kubenswrapper[5012]: I0219 05:38:17.992604 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006201 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vmqw\" (UniqueName: \"kubernetes.io/projected/48b2548c-eb36-4c42-a84f-2d3f2084a46f-kube-api-access-7vmqw\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006240 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-frr-sockets\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006265 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe949ecf-1cb7-47c7-b196-d4851f142c5f-cert\") pod \"controller-69bbfbf88f-c4jbq\" (UID: \"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006287 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/82cb6684-3937-45f8-9f18-56940e88f480-metallb-excludel2\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006319 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4882d\" (UniqueName: 
\"kubernetes.io/projected/82cb6684-3937-45f8-9f18-56940e88f480-kube-api-access-4882d\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006335 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8lbv\" (UniqueName: \"kubernetes.io/projected/fe949ecf-1cb7-47c7-b196-d4851f142c5f-kube-api-access-v8lbv\") pod \"controller-69bbfbf88f-c4jbq\" (UID: \"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006356 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/431a9bf4-479e-4255-9664-554c80fa4376-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-hdb84\" (UID: \"431a9bf4-479e-4255-9664-554c80fa4376\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006376 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlfvs\" (UniqueName: \"kubernetes.io/projected/431a9bf4-479e-4255-9664-554c80fa4376-kube-api-access-jlfvs\") pod \"frr-k8s-webhook-server-78b44bf5bb-hdb84\" (UID: \"431a9bf4-479e-4255-9664-554c80fa4376\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006397 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-metrics\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006413 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-frr-conf\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006434 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe949ecf-1cb7-47c7-b196-d4851f142c5f-metrics-certs\") pod \"controller-69bbfbf88f-c4jbq\" (UID: \"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006465 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/48b2548c-eb36-4c42-a84f-2d3f2084a46f-frr-startup\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006482 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-reloader\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006515 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-metrics-certs\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006531 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.006549 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48b2548c-eb36-4c42-a84f-2d3f2084a46f-metrics-certs\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.008417 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-c4jbq"] Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107538 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-metrics-certs\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107592 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107613 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48b2548c-eb36-4c42-a84f-2d3f2084a46f-metrics-certs\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107640 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vmqw\" (UniqueName: 
\"kubernetes.io/projected/48b2548c-eb36-4c42-a84f-2d3f2084a46f-kube-api-access-7vmqw\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107680 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-frr-sockets\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107711 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe949ecf-1cb7-47c7-b196-d4851f142c5f-cert\") pod \"controller-69bbfbf88f-c4jbq\" (UID: \"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107737 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4882d\" (UniqueName: \"kubernetes.io/projected/82cb6684-3937-45f8-9f18-56940e88f480-kube-api-access-4882d\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107755 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/82cb6684-3937-45f8-9f18-56940e88f480-metallb-excludel2\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107780 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8lbv\" (UniqueName: \"kubernetes.io/projected/fe949ecf-1cb7-47c7-b196-d4851f142c5f-kube-api-access-v8lbv\") pod \"controller-69bbfbf88f-c4jbq\" (UID: 
\"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107812 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/431a9bf4-479e-4255-9664-554c80fa4376-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-hdb84\" (UID: \"431a9bf4-479e-4255-9664-554c80fa4376\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:18 crc kubenswrapper[5012]: E0219 05:38:18.107811 5012 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107837 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlfvs\" (UniqueName: \"kubernetes.io/projected/431a9bf4-479e-4255-9664-554c80fa4376-kube-api-access-jlfvs\") pod \"frr-k8s-webhook-server-78b44bf5bb-hdb84\" (UID: \"431a9bf4-479e-4255-9664-554c80fa4376\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107868 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-metrics\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: E0219 05:38:18.107905 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist podName:82cb6684-3937-45f8-9f18-56940e88f480 nodeName:}" failed. No retries permitted until 2026-02-19 05:38:18.607878983 +0000 UTC m=+794.641201552 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist") pod "speaker-87ct4" (UID: "82cb6684-3937-45f8-9f18-56940e88f480") : secret "metallb-memberlist" not found Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.107942 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-frr-conf\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.108026 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe949ecf-1cb7-47c7-b196-d4851f142c5f-metrics-certs\") pod \"controller-69bbfbf88f-c4jbq\" (UID: \"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.108049 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/48b2548c-eb36-4c42-a84f-2d3f2084a46f-frr-startup\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.108068 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-reloader\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.108277 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-metrics\") pod \"frr-k8s-4d76m\" (UID: 
\"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: E0219 05:38:18.108485 5012 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 19 05:38:18 crc kubenswrapper[5012]: E0219 05:38:18.108606 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48b2548c-eb36-4c42-a84f-2d3f2084a46f-metrics-certs podName:48b2548c-eb36-4c42-a84f-2d3f2084a46f nodeName:}" failed. No retries permitted until 2026-02-19 05:38:18.60857688 +0000 UTC m=+794.641899489 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/48b2548c-eb36-4c42-a84f-2d3f2084a46f-metrics-certs") pod "frr-k8s-4d76m" (UID: "48b2548c-eb36-4c42-a84f-2d3f2084a46f") : secret "frr-k8s-certs-secret" not found Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.108683 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-frr-sockets\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: E0219 05:38:18.108686 5012 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 19 05:38:18 crc kubenswrapper[5012]: E0219 05:38:18.108753 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe949ecf-1cb7-47c7-b196-d4851f142c5f-metrics-certs podName:fe949ecf-1cb7-47c7-b196-d4851f142c5f nodeName:}" failed. No retries permitted until 2026-02-19 05:38:18.608742184 +0000 UTC m=+794.642064753 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fe949ecf-1cb7-47c7-b196-d4851f142c5f-metrics-certs") pod "controller-69bbfbf88f-c4jbq" (UID: "fe949ecf-1cb7-47c7-b196-d4851f142c5f") : secret "controller-certs-secret" not found Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.109010 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-frr-conf\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.109048 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/48b2548c-eb36-4c42-a84f-2d3f2084a46f-reloader\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.109112 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/82cb6684-3937-45f8-9f18-56940e88f480-metallb-excludel2\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.109461 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/48b2548c-eb36-4c42-a84f-2d3f2084a46f-frr-startup\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.112754 5012 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.114698 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-metrics-certs\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.116787 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/431a9bf4-479e-4255-9664-554c80fa4376-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-hdb84\" (UID: \"431a9bf4-479e-4255-9664-554c80fa4376\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.121612 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe949ecf-1cb7-47c7-b196-d4851f142c5f-cert\") pod \"controller-69bbfbf88f-c4jbq\" (UID: \"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.125985 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vmqw\" (UniqueName: \"kubernetes.io/projected/48b2548c-eb36-4c42-a84f-2d3f2084a46f-kube-api-access-7vmqw\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.129015 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8lbv\" (UniqueName: \"kubernetes.io/projected/fe949ecf-1cb7-47c7-b196-d4851f142c5f-kube-api-access-v8lbv\") pod \"controller-69bbfbf88f-c4jbq\" (UID: \"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.131348 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4882d\" (UniqueName: \"kubernetes.io/projected/82cb6684-3937-45f8-9f18-56940e88f480-kube-api-access-4882d\") pod \"speaker-87ct4\" 
(UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.132638 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlfvs\" (UniqueName: \"kubernetes.io/projected/431a9bf4-479e-4255-9664-554c80fa4376-kube-api-access-jlfvs\") pod \"frr-k8s-webhook-server-78b44bf5bb-hdb84\" (UID: \"431a9bf4-479e-4255-9664-554c80fa4376\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.232640 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.453041 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84"] Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.612577 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe949ecf-1cb7-47c7-b196-d4851f142c5f-metrics-certs\") pod \"controller-69bbfbf88f-c4jbq\" (UID: \"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.613032 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.613052 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48b2548c-eb36-4c42-a84f-2d3f2084a46f-metrics-certs\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 
05:38:18 crc kubenswrapper[5012]: E0219 05:38:18.613887 5012 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 05:38:18 crc kubenswrapper[5012]: E0219 05:38:18.614072 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist podName:82cb6684-3937-45f8-9f18-56940e88f480 nodeName:}" failed. No retries permitted until 2026-02-19 05:38:19.614031364 +0000 UTC m=+795.647353963 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist") pod "speaker-87ct4" (UID: "82cb6684-3937-45f8-9f18-56940e88f480") : secret "metallb-memberlist" not found Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.621358 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe949ecf-1cb7-47c7-b196-d4851f142c5f-metrics-certs\") pod \"controller-69bbfbf88f-c4jbq\" (UID: \"fe949ecf-1cb7-47c7-b196-d4851f142c5f\") " pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.621587 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/48b2548c-eb36-4c42-a84f-2d3f2084a46f-metrics-certs\") pod \"frr-k8s-4d76m\" (UID: \"48b2548c-eb36-4c42-a84f-2d3f2084a46f\") " pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.823571 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:18 crc kubenswrapper[5012]: I0219 05:38:18.913892 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:19 crc kubenswrapper[5012]: I0219 05:38:19.440224 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerStarted","Data":"d9f3573276cb5e4a080d5a4701db3de93e1051e5801d64937ca8e0c702fc27bb"} Feb 19 05:38:19 crc kubenswrapper[5012]: I0219 05:38:19.441166 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" event={"ID":"431a9bf4-479e-4255-9664-554c80fa4376","Type":"ContainerStarted","Data":"660ea8a59313a5f500662062a7161875c8a8cdb9f34620a12910f8f57a04caa8"} Feb 19 05:38:19 crc kubenswrapper[5012]: I0219 05:38:19.484790 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-c4jbq"] Feb 19 05:38:19 crc kubenswrapper[5012]: I0219 05:38:19.629042 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:19 crc kubenswrapper[5012]: E0219 05:38:19.629392 5012 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 05:38:19 crc kubenswrapper[5012]: E0219 05:38:19.629512 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist podName:82cb6684-3937-45f8-9f18-56940e88f480 nodeName:}" failed. No retries permitted until 2026-02-19 05:38:21.629484591 +0000 UTC m=+797.662807200 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist") pod "speaker-87ct4" (UID: "82cb6684-3937-45f8-9f18-56940e88f480") : secret "metallb-memberlist" not found Feb 19 05:38:20 crc kubenswrapper[5012]: I0219 05:38:20.460854 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-c4jbq" event={"ID":"fe949ecf-1cb7-47c7-b196-d4851f142c5f","Type":"ContainerStarted","Data":"0d3df4829290d2c587ab8aa88f9b2bceb6740e2693ceec4a59b5bf62f38e40b7"} Feb 19 05:38:20 crc kubenswrapper[5012]: I0219 05:38:20.460897 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-c4jbq" event={"ID":"fe949ecf-1cb7-47c7-b196-d4851f142c5f","Type":"ContainerStarted","Data":"07ef5bbedac13bf3c83e38b89ace08f3941c8e5d8ed16da0452b817d5d954270"} Feb 19 05:38:20 crc kubenswrapper[5012]: I0219 05:38:20.460909 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-c4jbq" event={"ID":"fe949ecf-1cb7-47c7-b196-d4851f142c5f","Type":"ContainerStarted","Data":"c29bf2bcb45cf3c033aec0797b2424b729c02eeb92df71b09842dfb40810b852"} Feb 19 05:38:20 crc kubenswrapper[5012]: I0219 05:38:20.461843 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:20 crc kubenswrapper[5012]: I0219 05:38:20.491287 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-c4jbq" podStartSLOduration=3.491267369 podStartE2EDuration="3.491267369s" podCreationTimestamp="2026-02-19 05:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:38:20.48063401 +0000 UTC m=+796.513956579" watchObservedRunningTime="2026-02-19 05:38:20.491267369 +0000 UTC m=+796.524589938" Feb 19 05:38:21 crc kubenswrapper[5012]: 
I0219 05:38:21.656009 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:21 crc kubenswrapper[5012]: I0219 05:38:21.671291 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/82cb6684-3937-45f8-9f18-56940e88f480-memberlist\") pod \"speaker-87ct4\" (UID: \"82cb6684-3937-45f8-9f18-56940e88f480\") " pod="metallb-system/speaker-87ct4" Feb 19 05:38:21 crc kubenswrapper[5012]: I0219 05:38:21.903380 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-87ct4" Feb 19 05:38:21 crc kubenswrapper[5012]: W0219 05:38:21.944965 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82cb6684_3937_45f8_9f18_56940e88f480.slice/crio-62d5f1ffe62d16ec514817c37035e1cdf01e0e0c063a4174ba1e9ec6cde99c86 WatchSource:0}: Error finding container 62d5f1ffe62d16ec514817c37035e1cdf01e0e0c063a4174ba1e9ec6cde99c86: Status 404 returned error can't find the container with id 62d5f1ffe62d16ec514817c37035e1cdf01e0e0c063a4174ba1e9ec6cde99c86 Feb 19 05:38:22 crc kubenswrapper[5012]: I0219 05:38:22.474813 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-87ct4" event={"ID":"82cb6684-3937-45f8-9f18-56940e88f480","Type":"ContainerStarted","Data":"ef6385321a50dbc57d893ed85934d5e9ef181b7d5f0ccdf578715dca403f4b05"} Feb 19 05:38:22 crc kubenswrapper[5012]: I0219 05:38:22.475181 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-87ct4" event={"ID":"82cb6684-3937-45f8-9f18-56940e88f480","Type":"ContainerStarted","Data":"301c02fa8c0c52af6793aba7cbeb93211116a660f7266c7671cf8aa6806945a9"} Feb 19 05:38:22 crc 
kubenswrapper[5012]: I0219 05:38:22.475195 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-87ct4" event={"ID":"82cb6684-3937-45f8-9f18-56940e88f480","Type":"ContainerStarted","Data":"62d5f1ffe62d16ec514817c37035e1cdf01e0e0c063a4174ba1e9ec6cde99c86"} Feb 19 05:38:22 crc kubenswrapper[5012]: I0219 05:38:22.475613 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-87ct4" Feb 19 05:38:22 crc kubenswrapper[5012]: I0219 05:38:22.494190 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-87ct4" podStartSLOduration=5.494168013 podStartE2EDuration="5.494168013s" podCreationTimestamp="2026-02-19 05:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:38:22.492776819 +0000 UTC m=+798.526099388" watchObservedRunningTime="2026-02-19 05:38:22.494168013 +0000 UTC m=+798.527490582" Feb 19 05:38:26 crc kubenswrapper[5012]: I0219 05:38:26.512793 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" event={"ID":"431a9bf4-479e-4255-9664-554c80fa4376","Type":"ContainerStarted","Data":"52d5adf8a2b549a8d58613d7fa52bc091548b69823b737bae1e84a5ab8dc0e37"} Feb 19 05:38:26 crc kubenswrapper[5012]: I0219 05:38:26.513707 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:26 crc kubenswrapper[5012]: I0219 05:38:26.519356 5012 generic.go:334] "Generic (PLEG): container finished" podID="48b2548c-eb36-4c42-a84f-2d3f2084a46f" containerID="e82f60ebe1a7c9228a0dd9dfa0ba5e61c52b9b60d5402b45431873868ef774f5" exitCode=0 Feb 19 05:38:26 crc kubenswrapper[5012]: I0219 05:38:26.519417 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" 
event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerDied","Data":"e82f60ebe1a7c9228a0dd9dfa0ba5e61c52b9b60d5402b45431873868ef774f5"} Feb 19 05:38:26 crc kubenswrapper[5012]: I0219 05:38:26.533829 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" podStartSLOduration=1.748000115 podStartE2EDuration="9.533797687s" podCreationTimestamp="2026-02-19 05:38:17 +0000 UTC" firstStartedPulling="2026-02-19 05:38:18.471114995 +0000 UTC m=+794.504437564" lastFinishedPulling="2026-02-19 05:38:26.256912537 +0000 UTC m=+802.290235136" observedRunningTime="2026-02-19 05:38:26.53271129 +0000 UTC m=+802.566033889" watchObservedRunningTime="2026-02-19 05:38:26.533797687 +0000 UTC m=+802.567120296" Feb 19 05:38:27 crc kubenswrapper[5012]: I0219 05:38:27.531239 5012 generic.go:334] "Generic (PLEG): container finished" podID="48b2548c-eb36-4c42-a84f-2d3f2084a46f" containerID="5b2a2771f976c94f1be824c3868c214d5ed383407f23c4fbcea458e4fa09c2f0" exitCode=0 Feb 19 05:38:27 crc kubenswrapper[5012]: I0219 05:38:27.531365 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerDied","Data":"5b2a2771f976c94f1be824c3868c214d5ed383407f23c4fbcea458e4fa09c2f0"} Feb 19 05:38:28 crc kubenswrapper[5012]: I0219 05:38:28.542867 5012 generic.go:334] "Generic (PLEG): container finished" podID="48b2548c-eb36-4c42-a84f-2d3f2084a46f" containerID="128cd9457413a787da8d23cd5c8a89e8704790be28bef11dd04f186d32cfb420" exitCode=0 Feb 19 05:38:28 crc kubenswrapper[5012]: I0219 05:38:28.543878 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerDied","Data":"128cd9457413a787da8d23cd5c8a89e8704790be28bef11dd04f186d32cfb420"} Feb 19 05:38:29 crc kubenswrapper[5012]: I0219 05:38:29.558650 5012 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerStarted","Data":"74c0ed099b78d31089c47af49ea78f92b53b60561adfc44aa374f9cdb0f876c0"} Feb 19 05:38:29 crc kubenswrapper[5012]: I0219 05:38:29.558927 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerStarted","Data":"9a9d867988893a25143807f3026e83a5aa4c9fbaeba526284c69f33633a86e39"} Feb 19 05:38:29 crc kubenswrapper[5012]: I0219 05:38:29.558938 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerStarted","Data":"e066bfe7f144cb2fbbe7968e4e1dcd6d95af98f24db9002e64087029223e6f83"} Feb 19 05:38:29 crc kubenswrapper[5012]: I0219 05:38:29.558948 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerStarted","Data":"7a9632ee62eb68c701e61b3a8978f8319de995e896a7aa477545f61b3b34d753"} Feb 19 05:38:29 crc kubenswrapper[5012]: I0219 05:38:29.558956 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerStarted","Data":"10f3e58db8955afed9058e0af3b2a44a2ebd305a90eb000d3871104f0420fd86"} Feb 19 05:38:30 crc kubenswrapper[5012]: I0219 05:38:30.572521 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4d76m" event={"ID":"48b2548c-eb36-4c42-a84f-2d3f2084a46f","Type":"ContainerStarted","Data":"4afe7c497f913cd1fc74bdc2f49214c5b4fc3750cee7ceee45fae8c88e617c79"} Feb 19 05:38:30 crc kubenswrapper[5012]: I0219 05:38:30.572954 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:30 crc kubenswrapper[5012]: I0219 05:38:30.613272 5012 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-4d76m" podStartSLOduration=6.393057364 podStartE2EDuration="13.613222908s" podCreationTimestamp="2026-02-19 05:38:17 +0000 UTC" firstStartedPulling="2026-02-19 05:38:19.008401843 +0000 UTC m=+795.041724452" lastFinishedPulling="2026-02-19 05:38:26.228567387 +0000 UTC m=+802.261889996" observedRunningTime="2026-02-19 05:38:30.603713897 +0000 UTC m=+806.637036506" watchObservedRunningTime="2026-02-19 05:38:30.613222908 +0000 UTC m=+806.646545517" Feb 19 05:38:33 crc kubenswrapper[5012]: I0219 05:38:33.824920 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:33 crc kubenswrapper[5012]: I0219 05:38:33.889289 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:38 crc kubenswrapper[5012]: I0219 05:38:38.238799 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-hdb84" Feb 19 05:38:38 crc kubenswrapper[5012]: I0219 05:38:38.830714 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4d76m" Feb 19 05:38:38 crc kubenswrapper[5012]: I0219 05:38:38.927929 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-c4jbq" Feb 19 05:38:40 crc kubenswrapper[5012]: I0219 05:38:40.813676 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rtnz8"] Feb 19 05:38:40 crc kubenswrapper[5012]: I0219 05:38:40.817200 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:40 crc kubenswrapper[5012]: I0219 05:38:40.837097 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtnz8"] Feb 19 05:38:40 crc kubenswrapper[5012]: I0219 05:38:40.969696 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27758\" (UniqueName: \"kubernetes.io/projected/2ad2fcc6-eb34-4443-b76a-08bb5891507f-kube-api-access-27758\") pod \"redhat-marketplace-rtnz8\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:40 crc kubenswrapper[5012]: I0219 05:38:40.969791 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-utilities\") pod \"redhat-marketplace-rtnz8\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:40 crc kubenswrapper[5012]: I0219 05:38:40.969827 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-catalog-content\") pod \"redhat-marketplace-rtnz8\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.070908 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-catalog-content\") pod \"redhat-marketplace-rtnz8\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.071114 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-27758\" (UniqueName: \"kubernetes.io/projected/2ad2fcc6-eb34-4443-b76a-08bb5891507f-kube-api-access-27758\") pod \"redhat-marketplace-rtnz8\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.071685 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-utilities\") pod \"redhat-marketplace-rtnz8\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.071768 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-catalog-content\") pod \"redhat-marketplace-rtnz8\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.072238 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-utilities\") pod \"redhat-marketplace-rtnz8\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.106508 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27758\" (UniqueName: \"kubernetes.io/projected/2ad2fcc6-eb34-4443-b76a-08bb5891507f-kube-api-access-27758\") pod \"redhat-marketplace-rtnz8\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") " pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.176254 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtnz8" Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.474953 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtnz8"] Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.667396 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtnz8" event={"ID":"2ad2fcc6-eb34-4443-b76a-08bb5891507f","Type":"ContainerStarted","Data":"9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990"} Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.667801 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtnz8" event={"ID":"2ad2fcc6-eb34-4443-b76a-08bb5891507f","Type":"ContainerStarted","Data":"f7a540a730396232505345a52eeca82daa986d46ed9d4f09815e20a2f47f7abf"} Feb 19 05:38:41 crc kubenswrapper[5012]: I0219 05:38:41.908941 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-87ct4" Feb 19 05:38:42 crc kubenswrapper[5012]: I0219 05:38:42.679947 5012 generic.go:334] "Generic (PLEG): container finished" podID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerID="9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990" exitCode=0 Feb 19 05:38:42 crc kubenswrapper[5012]: I0219 05:38:42.680042 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtnz8" event={"ID":"2ad2fcc6-eb34-4443-b76a-08bb5891507f","Type":"ContainerDied","Data":"9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990"} Feb 19 05:38:43 crc kubenswrapper[5012]: I0219 05:38:43.691511 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtnz8" event={"ID":"2ad2fcc6-eb34-4443-b76a-08bb5891507f","Type":"ContainerStarted","Data":"37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1"} Feb 19 05:38:44 crc 
kubenswrapper[5012]: I0219 05:38:44.704285 5012 generic.go:334] "Generic (PLEG): container finished" podID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerID="37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1" exitCode=0
Feb 19 05:38:44 crc kubenswrapper[5012]: I0219 05:38:44.726090 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtnz8" event={"ID":"2ad2fcc6-eb34-4443-b76a-08bb5891507f","Type":"ContainerDied","Data":"37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1"}
Feb 19 05:38:45 crc kubenswrapper[5012]: I0219 05:38:45.713997 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtnz8" event={"ID":"2ad2fcc6-eb34-4443-b76a-08bb5891507f","Type":"ContainerStarted","Data":"8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee"}
Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.382399 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rtnz8" podStartSLOduration=5.972075227 podStartE2EDuration="8.382373659s" podCreationTimestamp="2026-02-19 05:38:40 +0000 UTC" firstStartedPulling="2026-02-19 05:38:42.683062015 +0000 UTC m=+818.716384624" lastFinishedPulling="2026-02-19 05:38:45.093360477 +0000 UTC m=+821.126683056" observedRunningTime="2026-02-19 05:38:46.419051457 +0000 UTC m=+822.452374026" watchObservedRunningTime="2026-02-19 05:38:48.382373659 +0000 UTC m=+824.415696258"
Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.386383 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cl447"]
Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.387577 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cl447"
Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.390847 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.391158 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.400054 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-rgf79"
Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.401832 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cl447"]
Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.549255 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vng4w\" (UniqueName: \"kubernetes.io/projected/797c14cf-1b4d-4b4e-9dc5-4843e2e77cef-kube-api-access-vng4w\") pod \"openstack-operator-index-cl447\" (UID: \"797c14cf-1b4d-4b4e-9dc5-4843e2e77cef\") " pod="openstack-operators/openstack-operator-index-cl447"
Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.650408 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vng4w\" (UniqueName: \"kubernetes.io/projected/797c14cf-1b4d-4b4e-9dc5-4843e2e77cef-kube-api-access-vng4w\") pod \"openstack-operator-index-cl447\" (UID: \"797c14cf-1b4d-4b4e-9dc5-4843e2e77cef\") " pod="openstack-operators/openstack-operator-index-cl447"
Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.692779 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vng4w\" (UniqueName: \"kubernetes.io/projected/797c14cf-1b4d-4b4e-9dc5-4843e2e77cef-kube-api-access-vng4w\") pod \"openstack-operator-index-cl447\" (UID: \"797c14cf-1b4d-4b4e-9dc5-4843e2e77cef\") " pod="openstack-operators/openstack-operator-index-cl447"
Feb 19 05:38:48 crc kubenswrapper[5012]: I0219 05:38:48.729703 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cl447"
Feb 19 05:38:49 crc kubenswrapper[5012]: I0219 05:38:49.019833 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cl447"]
Feb 19 05:38:49 crc kubenswrapper[5012]: W0219 05:38:49.024161 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod797c14cf_1b4d_4b4e_9dc5_4843e2e77cef.slice/crio-ea4fb0159267403236388ac9641dfe7f09edddb7388bdf7ba5591f86c59338f0 WatchSource:0}: Error finding container ea4fb0159267403236388ac9641dfe7f09edddb7388bdf7ba5591f86c59338f0: Status 404 returned error can't find the container with id ea4fb0159267403236388ac9641dfe7f09edddb7388bdf7ba5591f86c59338f0
Feb 19 05:38:49 crc kubenswrapper[5012]: I0219 05:38:49.748804 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cl447" event={"ID":"797c14cf-1b4d-4b4e-9dc5-4843e2e77cef","Type":"ContainerStarted","Data":"ea4fb0159267403236388ac9641dfe7f09edddb7388bdf7ba5591f86c59338f0"}
Feb 19 05:38:50 crc kubenswrapper[5012]: I0219 05:38:50.760396 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cl447" event={"ID":"797c14cf-1b4d-4b4e-9dc5-4843e2e77cef","Type":"ContainerStarted","Data":"25719e595a9d519d2b875b4a43941e7e665e4dc860031e497d4b63dad331962c"}
Feb 19 05:38:50 crc kubenswrapper[5012]: I0219 05:38:50.798691 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cl447" podStartSLOduration=1.865319637 podStartE2EDuration="2.798661988s" podCreationTimestamp="2026-02-19 05:38:48 +0000 UTC" firstStartedPulling="2026-02-19 05:38:49.027927513 +0000 UTC m=+825.061250092" lastFinishedPulling="2026-02-19 05:38:49.961269834 +0000 UTC m=+825.994592443" observedRunningTime="2026-02-19 05:38:50.788826489 +0000 UTC m=+826.822149088" watchObservedRunningTime="2026-02-19 05:38:50.798661988 +0000 UTC m=+826.831984587"
Feb 19 05:38:51 crc kubenswrapper[5012]: I0219 05:38:51.177030 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rtnz8"
Feb 19 05:38:51 crc kubenswrapper[5012]: I0219 05:38:51.177438 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rtnz8"
Feb 19 05:38:51 crc kubenswrapper[5012]: I0219 05:38:51.239166 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rtnz8"
Feb 19 05:38:51 crc kubenswrapper[5012]: I0219 05:38:51.841863 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rtnz8"
Feb 19 05:38:53 crc kubenswrapper[5012]: I0219 05:38:53.368883 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtnz8"]
Feb 19 05:38:54 crc kubenswrapper[5012]: I0219 05:38:54.792648 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rtnz8" podUID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerName="registry-server" containerID="cri-o://8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee" gracePeriod=2
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.262351 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtnz8"
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.360953 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-catalog-content\") pod \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") "
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.361050 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27758\" (UniqueName: \"kubernetes.io/projected/2ad2fcc6-eb34-4443-b76a-08bb5891507f-kube-api-access-27758\") pod \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") "
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.361216 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-utilities\") pod \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\" (UID: \"2ad2fcc6-eb34-4443-b76a-08bb5891507f\") "
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.362836 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-utilities" (OuterVolumeSpecName: "utilities") pod "2ad2fcc6-eb34-4443-b76a-08bb5891507f" (UID: "2ad2fcc6-eb34-4443-b76a-08bb5891507f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.371921 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ad2fcc6-eb34-4443-b76a-08bb5891507f-kube-api-access-27758" (OuterVolumeSpecName: "kube-api-access-27758") pod "2ad2fcc6-eb34-4443-b76a-08bb5891507f" (UID: "2ad2fcc6-eb34-4443-b76a-08bb5891507f"). InnerVolumeSpecName "kube-api-access-27758". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.397203 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ad2fcc6-eb34-4443-b76a-08bb5891507f" (UID: "2ad2fcc6-eb34-4443-b76a-08bb5891507f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.463449 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.463499 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad2fcc6-eb34-4443-b76a-08bb5891507f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.463524 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27758\" (UniqueName: \"kubernetes.io/projected/2ad2fcc6-eb34-4443-b76a-08bb5891507f-kube-api-access-27758\") on node \"crc\" DevicePath \"\""
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.804937 5012 generic.go:334] "Generic (PLEG): container finished" podID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerID="8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee" exitCode=0
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.804983 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtnz8" event={"ID":"2ad2fcc6-eb34-4443-b76a-08bb5891507f","Type":"ContainerDied","Data":"8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee"}
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.805038 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rtnz8" event={"ID":"2ad2fcc6-eb34-4443-b76a-08bb5891507f","Type":"ContainerDied","Data":"f7a540a730396232505345a52eeca82daa986d46ed9d4f09815e20a2f47f7abf"}
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.805051 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rtnz8"
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.805062 5012 scope.go:117] "RemoveContainer" containerID="8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee"
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.830341 5012 scope.go:117] "RemoveContainer" containerID="37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1"
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.860611 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtnz8"]
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.868112 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rtnz8"]
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.869472 5012 scope.go:117] "RemoveContainer" containerID="9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990"
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.894944 5012 scope.go:117] "RemoveContainer" containerID="8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee"
Feb 19 05:38:55 crc kubenswrapper[5012]: E0219 05:38:55.895567 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee\": container with ID starting with 8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee not found: ID does not exist" containerID="8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee"
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.895621 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee"} err="failed to get container status \"8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee\": rpc error: code = NotFound desc = could not find container \"8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee\": container with ID starting with 8c27454fe33b9eb47260e8f62b5567ee745d912e09b996f57106862f63e33eee not found: ID does not exist"
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.895657 5012 scope.go:117] "RemoveContainer" containerID="37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1"
Feb 19 05:38:55 crc kubenswrapper[5012]: E0219 05:38:55.896102 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1\": container with ID starting with 37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1 not found: ID does not exist" containerID="37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1"
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.896202 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1"} err="failed to get container status \"37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1\": rpc error: code = NotFound desc = could not find container \"37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1\": container with ID starting with 37d2112acacb5f9e5e914b68773f1d2d3f2e1a0e87cb307c406f5e236984ddd1 not found: ID does not exist"
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.896240 5012 scope.go:117] "RemoveContainer" containerID="9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990"
Feb 19 05:38:55 crc kubenswrapper[5012]: E0219 05:38:55.896782 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990\": container with ID starting with 9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990 not found: ID does not exist" containerID="9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990"
Feb 19 05:38:55 crc kubenswrapper[5012]: I0219 05:38:55.896826 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990"} err="failed to get container status \"9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990\": rpc error: code = NotFound desc = could not find container \"9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990\": container with ID starting with 9a1eb833cf6f8ee54063ab87b94d75c19d73aa247a0b85930aea493174eef990 not found: ID does not exist"
Feb 19 05:38:56 crc kubenswrapper[5012]: I0219 05:38:56.715355 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" path="/var/lib/kubelet/pods/2ad2fcc6-eb34-4443-b76a-08bb5891507f/volumes"
Feb 19 05:38:58 crc kubenswrapper[5012]: I0219 05:38:58.731747 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cl447"
Feb 19 05:38:58 crc kubenswrapper[5012]: I0219 05:38:58.732190 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cl447"
Feb 19 05:38:58 crc kubenswrapper[5012]: I0219 05:38:58.774683 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cl447"
Feb 19 05:38:58 crc kubenswrapper[5012]: I0219 05:38:58.878965 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cl447"
Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.631988 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"]
Feb 19 05:39:01 crc kubenswrapper[5012]: E0219 05:39:01.632396 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerName="extract-content"
Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.632443 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerName="extract-content"
Feb 19 05:39:01 crc kubenswrapper[5012]: E0219 05:39:01.632459 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerName="extract-utilities"
Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.632472 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerName="extract-utilities"
Feb 19 05:39:01 crc kubenswrapper[5012]: E0219 05:39:01.632504 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerName="registry-server"
Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.632517 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerName="registry-server"
Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.632731 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ad2fcc6-eb34-4443-b76a-08bb5891507f" containerName="registry-server"
Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.634234 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"
Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.639196 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-gxsjj"
Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.651005 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"]
Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.777152 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"
Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.777268 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"
Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.777463 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qtns\" (UniqueName: \"kubernetes.io/projected/59bb7d65-7d8f-487c-b586-7cd4be8eab12-kube-api-access-7qtns\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"
Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.878642 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"
Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.878781 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qtns\" (UniqueName: \"kubernetes.io/projected/59bb7d65-7d8f-487c-b586-7cd4be8eab12-kube-api-access-7qtns\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"
Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.878883 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"
Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.879714 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-util\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"
Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.879778 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-bundle\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"
Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.918601 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qtns\" (UniqueName: \"kubernetes.io/projected/59bb7d65-7d8f-487c-b586-7cd4be8eab12-kube-api-access-7qtns\") pod \"8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") " pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"
Feb 19 05:39:01 crc kubenswrapper[5012]: I0219 05:39:01.970367 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"
Feb 19 05:39:02 crc kubenswrapper[5012]: I0219 05:39:02.469282 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"]
Feb 19 05:39:02 crc kubenswrapper[5012]: W0219 05:39:02.479428 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59bb7d65_7d8f_487c_b586_7cd4be8eab12.slice/crio-40a0c1f8cde09c043d274a91e90af16c7f1474f53bf23f231af6a4275f4ccc9e WatchSource:0}: Error finding container 40a0c1f8cde09c043d274a91e90af16c7f1474f53bf23f231af6a4275f4ccc9e: Status 404 returned error can't find the container with id 40a0c1f8cde09c043d274a91e90af16c7f1474f53bf23f231af6a4275f4ccc9e
Feb 19 05:39:02 crc kubenswrapper[5012]: I0219 05:39:02.875557 5012 generic.go:334] "Generic (PLEG): container finished" podID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerID="754d7611c5f9ed36a19fe10c3aa0b56c3acd6d75c5d1c539a226d17d0986358d" exitCode=0
Feb 19 05:39:02 crc kubenswrapper[5012]: I0219 05:39:02.875617 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" event={"ID":"59bb7d65-7d8f-487c-b586-7cd4be8eab12","Type":"ContainerDied","Data":"754d7611c5f9ed36a19fe10c3aa0b56c3acd6d75c5d1c539a226d17d0986358d"}
Feb 19 05:39:02 crc kubenswrapper[5012]: I0219 05:39:02.877027 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" event={"ID":"59bb7d65-7d8f-487c-b586-7cd4be8eab12","Type":"ContainerStarted","Data":"40a0c1f8cde09c043d274a91e90af16c7f1474f53bf23f231af6a4275f4ccc9e"}
Feb 19 05:39:03 crc kubenswrapper[5012]: I0219 05:39:03.888402 5012 generic.go:334] "Generic (PLEG): container finished" podID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerID="7abefeba9ce892cb36dc582096096a2870b6d5345619dcb874b129d34ff33c4f" exitCode=0
Feb 19 05:39:03 crc kubenswrapper[5012]: I0219 05:39:03.888464 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" event={"ID":"59bb7d65-7d8f-487c-b586-7cd4be8eab12","Type":"ContainerDied","Data":"7abefeba9ce892cb36dc582096096a2870b6d5345619dcb874b129d34ff33c4f"}
Feb 19 05:39:04 crc kubenswrapper[5012]: I0219 05:39:04.901258 5012 generic.go:334] "Generic (PLEG): container finished" podID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerID="c1232041e5886d1fb567c5bd5a603a4dc061059e78ff14b1132df2e546ac4bdc" exitCode=0
Feb 19 05:39:04 crc kubenswrapper[5012]: I0219 05:39:04.901347 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" event={"ID":"59bb7d65-7d8f-487c-b586-7cd4be8eab12","Type":"ContainerDied","Data":"c1232041e5886d1fb567c5bd5a603a4dc061059e78ff14b1132df2e546ac4bdc"}
Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.275720 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"
Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.453910 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qtns\" (UniqueName: \"kubernetes.io/projected/59bb7d65-7d8f-487c-b586-7cd4be8eab12-kube-api-access-7qtns\") pod \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") "
Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.454001 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-util\") pod \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") "
Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.454088 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-bundle\") pod \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\" (UID: \"59bb7d65-7d8f-487c-b586-7cd4be8eab12\") "
Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.455182 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-bundle" (OuterVolumeSpecName: "bundle") pod "59bb7d65-7d8f-487c-b586-7cd4be8eab12" (UID: "59bb7d65-7d8f-487c-b586-7cd4be8eab12"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.463395 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59bb7d65-7d8f-487c-b586-7cd4be8eab12-kube-api-access-7qtns" (OuterVolumeSpecName: "kube-api-access-7qtns") pod "59bb7d65-7d8f-487c-b586-7cd4be8eab12" (UID: "59bb7d65-7d8f-487c-b586-7cd4be8eab12"). InnerVolumeSpecName "kube-api-access-7qtns". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.483165 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-util" (OuterVolumeSpecName: "util") pod "59bb7d65-7d8f-487c-b586-7cd4be8eab12" (UID: "59bb7d65-7d8f-487c-b586-7cd4be8eab12"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.556599 5012 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.556646 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qtns\" (UniqueName: \"kubernetes.io/projected/59bb7d65-7d8f-487c-b586-7cd4be8eab12-kube-api-access-7qtns\") on node \"crc\" DevicePath \"\""
Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.556668 5012 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59bb7d65-7d8f-487c-b586-7cd4be8eab12-util\") on node \"crc\" DevicePath \"\""
Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.922798 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q" event={"ID":"59bb7d65-7d8f-487c-b586-7cd4be8eab12","Type":"ContainerDied","Data":"40a0c1f8cde09c043d274a91e90af16c7f1474f53bf23f231af6a4275f4ccc9e"}
Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.922872 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40a0c1f8cde09c043d274a91e90af16c7f1474f53bf23f231af6a4275f4ccc9e"
Feb 19 05:39:06 crc kubenswrapper[5012]: I0219 05:39:06.923222 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q"
Feb 19 05:39:11 crc kubenswrapper[5012]: I0219 05:39:11.921277 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk"]
Feb 19 05:39:11 crc kubenswrapper[5012]: E0219 05:39:11.922052 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerName="util"
Feb 19 05:39:11 crc kubenswrapper[5012]: I0219 05:39:11.922067 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerName="util"
Feb 19 05:39:11 crc kubenswrapper[5012]: E0219 05:39:11.922087 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerName="extract"
Feb 19 05:39:11 crc kubenswrapper[5012]: I0219 05:39:11.922095 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerName="extract"
Feb 19 05:39:11 crc kubenswrapper[5012]: E0219 05:39:11.922112 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerName="pull"
Feb 19 05:39:11 crc kubenswrapper[5012]: I0219 05:39:11.922121 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerName="pull"
Feb 19 05:39:11 crc kubenswrapper[5012]: I0219 05:39:11.922257 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="59bb7d65-7d8f-487c-b586-7cd4be8eab12" containerName="extract"
Feb 19 05:39:11 crc kubenswrapper[5012]: I0219 05:39:11.922788 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk"
Feb 19 05:39:11 crc kubenswrapper[5012]: I0219 05:39:11.924842 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-w5pqc"
Feb 19 05:39:11 crc kubenswrapper[5012]: I0219 05:39:11.950025 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk"]
Feb 19 05:39:11 crc kubenswrapper[5012]: I0219 05:39:11.958946 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98jdp\" (UniqueName: \"kubernetes.io/projected/76b34ac4-96f1-4bbc-9969-eb3e1cfc2159-kube-api-access-98jdp\") pod \"openstack-operator-controller-init-6679bf9b57-q57bk\" (UID: \"76b34ac4-96f1-4bbc-9969-eb3e1cfc2159\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk"
Feb 19 05:39:12 crc kubenswrapper[5012]: I0219 05:39:12.060385 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98jdp\" (UniqueName: \"kubernetes.io/projected/76b34ac4-96f1-4bbc-9969-eb3e1cfc2159-kube-api-access-98jdp\") pod \"openstack-operator-controller-init-6679bf9b57-q57bk\" (UID: \"76b34ac4-96f1-4bbc-9969-eb3e1cfc2159\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk"
Feb 19 05:39:12 crc kubenswrapper[5012]: I0219 05:39:12.095852 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98jdp\" (UniqueName: \"kubernetes.io/projected/76b34ac4-96f1-4bbc-9969-eb3e1cfc2159-kube-api-access-98jdp\") pod \"openstack-operator-controller-init-6679bf9b57-q57bk\" (UID: \"76b34ac4-96f1-4bbc-9969-eb3e1cfc2159\") " pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk"
Feb 19 05:39:12 crc kubenswrapper[5012]: I0219 05:39:12.244059 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk"
Feb 19 05:39:12 crc kubenswrapper[5012]: I0219 05:39:12.591288 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk"]
Feb 19 05:39:12 crc kubenswrapper[5012]: I0219 05:39:12.971952 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk" event={"ID":"76b34ac4-96f1-4bbc-9969-eb3e1cfc2159","Type":"ContainerStarted","Data":"be0cf90e7840d58f063834a172e481be88cde3d92cb7d50cf620c7fd753dc6bb"}
Feb 19 05:39:18 crc kubenswrapper[5012]: I0219 05:39:18.014582 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk" event={"ID":"76b34ac4-96f1-4bbc-9969-eb3e1cfc2159","Type":"ContainerStarted","Data":"15c525f23e864e23a6f6f84b762d46a4f648932d213342dbd7d85697814c187f"}
Feb 19 05:39:18 crc kubenswrapper[5012]: I0219 05:39:18.015100 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk"
Feb 19 05:39:18 crc kubenswrapper[5012]: I0219 05:39:18.064412 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk" podStartSLOduration=2.6056852619999997 podStartE2EDuration="7.064385836s" podCreationTimestamp="2026-02-19 05:39:11 +0000 UTC" firstStartedPulling="2026-02-19 05:39:12.609935823 +0000 UTC m=+848.643258412" lastFinishedPulling="2026-02-19 05:39:17.068636417 +0000 UTC m=+853.101958986" observedRunningTime="2026-02-19 05:39:18.059127518 +0000 UTC m=+854.092450117" watchObservedRunningTime="2026-02-19 05:39:18.064385836 +0000 UTC m=+854.097708445"
Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.199549 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5vmsf"]
Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.202221 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vmsf"
Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.212395 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5vmsf"]
Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.303072 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbr24\" (UniqueName: \"kubernetes.io/projected/88253e52-7e63-4042-8eee-d414c388e9c8-kube-api-access-lbr24\") pod \"certified-operators-5vmsf\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " pod="openshift-marketplace/certified-operators-5vmsf"
Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.303452 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-utilities\") pod \"certified-operators-5vmsf\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " pod="openshift-marketplace/certified-operators-5vmsf"
Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.303665 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-catalog-content\") pod \"certified-operators-5vmsf\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " pod="openshift-marketplace/certified-operators-5vmsf"
Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.404704 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-utilities\") pod \"certified-operators-5vmsf\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\")
" pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.404828 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-catalog-content\") pod \"certified-operators-5vmsf\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.404871 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbr24\" (UniqueName: \"kubernetes.io/projected/88253e52-7e63-4042-8eee-d414c388e9c8-kube-api-access-lbr24\") pod \"certified-operators-5vmsf\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.406257 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-catalog-content\") pod \"certified-operators-5vmsf\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.406344 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-utilities\") pod \"certified-operators-5vmsf\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.449325 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbr24\" (UniqueName: \"kubernetes.io/projected/88253e52-7e63-4042-8eee-d414c388e9c8-kube-api-access-lbr24\") pod \"certified-operators-5vmsf\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " 
pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.537163 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:20 crc kubenswrapper[5012]: I0219 05:39:20.753263 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5vmsf"] Feb 19 05:39:21 crc kubenswrapper[5012]: I0219 05:39:21.048585 5012 generic.go:334] "Generic (PLEG): container finished" podID="88253e52-7e63-4042-8eee-d414c388e9c8" containerID="ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a" exitCode=0 Feb 19 05:39:21 crc kubenswrapper[5012]: I0219 05:39:21.048979 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vmsf" event={"ID":"88253e52-7e63-4042-8eee-d414c388e9c8","Type":"ContainerDied","Data":"ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a"} Feb 19 05:39:21 crc kubenswrapper[5012]: I0219 05:39:21.049019 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vmsf" event={"ID":"88253e52-7e63-4042-8eee-d414c388e9c8","Type":"ContainerStarted","Data":"e81e24ff31c9e0fe526a55507b5fb0fce47a5fce655d3f1a64e54d56ef44547f"} Feb 19 05:39:22 crc kubenswrapper[5012]: I0219 05:39:22.247739 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6679bf9b57-q57bk" Feb 19 05:39:23 crc kubenswrapper[5012]: I0219 05:39:23.072586 5012 generic.go:334] "Generic (PLEG): container finished" podID="88253e52-7e63-4042-8eee-d414c388e9c8" containerID="584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f" exitCode=0 Feb 19 05:39:23 crc kubenswrapper[5012]: I0219 05:39:23.072633 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vmsf" 
event={"ID":"88253e52-7e63-4042-8eee-d414c388e9c8","Type":"ContainerDied","Data":"584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f"} Feb 19 05:39:24 crc kubenswrapper[5012]: I0219 05:39:24.100865 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vmsf" event={"ID":"88253e52-7e63-4042-8eee-d414c388e9c8","Type":"ContainerStarted","Data":"c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3"} Feb 19 05:39:24 crc kubenswrapper[5012]: I0219 05:39:24.135166 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5vmsf" podStartSLOduration=1.7679633479999999 podStartE2EDuration="4.13514407s" podCreationTimestamp="2026-02-19 05:39:20 +0000 UTC" firstStartedPulling="2026-02-19 05:39:21.050714839 +0000 UTC m=+857.084037448" lastFinishedPulling="2026-02-19 05:39:23.417895611 +0000 UTC m=+859.451218170" observedRunningTime="2026-02-19 05:39:24.13020171 +0000 UTC m=+860.163524319" watchObservedRunningTime="2026-02-19 05:39:24.13514407 +0000 UTC m=+860.168466649" Feb 19 05:39:30 crc kubenswrapper[5012]: I0219 05:39:30.537930 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:30 crc kubenswrapper[5012]: I0219 05:39:30.538341 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:30 crc kubenswrapper[5012]: I0219 05:39:30.589553 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:31 crc kubenswrapper[5012]: I0219 05:39:31.208281 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:31 crc kubenswrapper[5012]: I0219 05:39:31.280469 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-5vmsf"] Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.169590 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5vmsf" podUID="88253e52-7e63-4042-8eee-d414c388e9c8" containerName="registry-server" containerID="cri-o://c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3" gracePeriod=2 Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.646914 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.696088 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-utilities\") pod \"88253e52-7e63-4042-8eee-d414c388e9c8\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.696173 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbr24\" (UniqueName: \"kubernetes.io/projected/88253e52-7e63-4042-8eee-d414c388e9c8-kube-api-access-lbr24\") pod \"88253e52-7e63-4042-8eee-d414c388e9c8\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.696197 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-catalog-content\") pod \"88253e52-7e63-4042-8eee-d414c388e9c8\" (UID: \"88253e52-7e63-4042-8eee-d414c388e9c8\") " Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.697098 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-utilities" (OuterVolumeSpecName: "utilities") pod "88253e52-7e63-4042-8eee-d414c388e9c8" (UID: 
"88253e52-7e63-4042-8eee-d414c388e9c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.700763 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88253e52-7e63-4042-8eee-d414c388e9c8-kube-api-access-lbr24" (OuterVolumeSpecName: "kube-api-access-lbr24") pod "88253e52-7e63-4042-8eee-d414c388e9c8" (UID: "88253e52-7e63-4042-8eee-d414c388e9c8"). InnerVolumeSpecName "kube-api-access-lbr24". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.759920 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88253e52-7e63-4042-8eee-d414c388e9c8" (UID: "88253e52-7e63-4042-8eee-d414c388e9c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.798032 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.798063 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbr24\" (UniqueName: \"kubernetes.io/projected/88253e52-7e63-4042-8eee-d414c388e9c8-kube-api-access-lbr24\") on node \"crc\" DevicePath \"\"" Feb 19 05:39:33 crc kubenswrapper[5012]: I0219 05:39:33.798074 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88253e52-7e63-4042-8eee-d414c388e9c8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.177103 5012 generic.go:334] "Generic (PLEG): container finished" 
podID="88253e52-7e63-4042-8eee-d414c388e9c8" containerID="c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3" exitCode=0 Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.177145 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vmsf" event={"ID":"88253e52-7e63-4042-8eee-d414c388e9c8","Type":"ContainerDied","Data":"c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3"} Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.177191 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vmsf" event={"ID":"88253e52-7e63-4042-8eee-d414c388e9c8","Type":"ContainerDied","Data":"e81e24ff31c9e0fe526a55507b5fb0fce47a5fce655d3f1a64e54d56ef44547f"} Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.177188 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vmsf" Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.177212 5012 scope.go:117] "RemoveContainer" containerID="c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3" Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.195617 5012 scope.go:117] "RemoveContainer" containerID="584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f" Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.225760 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5vmsf"] Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.229127 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5vmsf"] Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.229411 5012 scope.go:117] "RemoveContainer" containerID="ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a" Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.264127 5012 scope.go:117] "RemoveContainer" 
containerID="c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3" Feb 19 05:39:34 crc kubenswrapper[5012]: E0219 05:39:34.264572 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3\": container with ID starting with c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3 not found: ID does not exist" containerID="c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3" Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.264611 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3"} err="failed to get container status \"c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3\": rpc error: code = NotFound desc = could not find container \"c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3\": container with ID starting with c2ac680d1f730503898dab4c728c409a2472a36c60540eb45492e92b04f7eeb3 not found: ID does not exist" Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.264635 5012 scope.go:117] "RemoveContainer" containerID="584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f" Feb 19 05:39:34 crc kubenswrapper[5012]: E0219 05:39:34.268573 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f\": container with ID starting with 584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f not found: ID does not exist" containerID="584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f" Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.268606 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f"} err="failed to get container status \"584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f\": rpc error: code = NotFound desc = could not find container \"584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f\": container with ID starting with 584a040965637562ed494787a819be5cf6dbf5f5a297653c6925d0a289a65f2f not found: ID does not exist" Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.268628 5012 scope.go:117] "RemoveContainer" containerID="ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a" Feb 19 05:39:34 crc kubenswrapper[5012]: E0219 05:39:34.272599 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a\": container with ID starting with ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a not found: ID does not exist" containerID="ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a" Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.272625 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a"} err="failed to get container status \"ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a\": rpc error: code = NotFound desc = could not find container \"ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a\": container with ID starting with ac80d1d1688325018975f4c8581a47ee7babfd3de76c0c30477fd02ac76d027a not found: ID does not exist" Feb 19 05:39:34 crc kubenswrapper[5012]: I0219 05:39:34.714833 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88253e52-7e63-4042-8eee-d414c388e9c8" path="/var/lib/kubelet/pods/88253e52-7e63-4042-8eee-d414c388e9c8/volumes" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 
05:39:43.366417 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n"] Feb 19 05:39:43 crc kubenswrapper[5012]: E0219 05:39:43.367060 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88253e52-7e63-4042-8eee-d414c388e9c8" containerName="registry-server" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.367072 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="88253e52-7e63-4042-8eee-d414c388e9c8" containerName="registry-server" Feb 19 05:39:43 crc kubenswrapper[5012]: E0219 05:39:43.367088 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88253e52-7e63-4042-8eee-d414c388e9c8" containerName="extract-utilities" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.367093 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="88253e52-7e63-4042-8eee-d414c388e9c8" containerName="extract-utilities" Feb 19 05:39:43 crc kubenswrapper[5012]: E0219 05:39:43.367103 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88253e52-7e63-4042-8eee-d414c388e9c8" containerName="extract-content" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.367111 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="88253e52-7e63-4042-8eee-d414c388e9c8" containerName="extract-content" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.367219 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="88253e52-7e63-4042-8eee-d414c388e9c8" containerName="registry-server" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.367602 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.370011 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-5fbns" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.373980 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.383968 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.384842 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.386634 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-fsczk" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.401823 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.424371 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.425194 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.428204 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.428653 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-c2dnh" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.429046 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.433374 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.433493 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-nbvp5" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.450558 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.454559 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.455289 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.458585 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.459180 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.459753 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-7dgvb" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.461729 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-llgkh" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.461919 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.469619 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.470389 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.471539 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-jwdvc" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.481609 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.499170 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.510099 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.522990 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.523752 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.529463 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.531617 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-dcgvb" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.538109 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.538908 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.544392 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5"] Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.545325 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.545825 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-2b6nl" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.546638 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpjpx\" (UniqueName: \"kubernetes.io/projected/0cc1b41b-fbf6-4d0c-b721-dcad09c03feb-kube-api-access-vpjpx\") pod \"barbican-operator-controller-manager-868647ff47-xzk2n\" (UID: \"0cc1b41b-fbf6-4d0c-b721-dcad09c03feb\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.546680 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrp6c\" (UniqueName: \"kubernetes.io/projected/11d49fcd-6e31-47e5-84a1-c6ae972e13cb-kube-api-access-jrp6c\") pod \"designate-operator-controller-manager-6d8bf5c495-kt4nw\" (UID: \"11d49fcd-6e31-47e5-84a1-c6ae972e13cb\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.546713 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k272\" (UniqueName: \"kubernetes.io/projected/8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be-kube-api-access-7k272\") pod \"glance-operator-controller-manager-77987464f4-qzq7x\" (UID: \"8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x" Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.546755 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf2sz\" (UniqueName: 
\"kubernetes.io/projected/8af03a54-ad7a-4684-b5a6-ba83f410e6ed-kube-api-access-kf2sz\") pod \"cinder-operator-controller-manager-5d946d989d-556xv\" (UID: \"8af03a54-ad7a-4684-b5a6-ba83f410e6ed\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.548497 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-lth8m"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.557372 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.563041 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.578366 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.579283 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.585337 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.586267 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.588699 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-2v7sl"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.617226 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.627952 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-zqw88"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649054 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpjpx\" (UniqueName: \"kubernetes.io/projected/0cc1b41b-fbf6-4d0c-b721-dcad09c03feb-kube-api-access-vpjpx\") pod \"barbican-operator-controller-manager-868647ff47-xzk2n\" (UID: \"0cc1b41b-fbf6-4d0c-b721-dcad09c03feb\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649134 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwrcl\" (UniqueName: \"kubernetes.io/projected/4f281b5b-b656-4d4a-b628-d4bfe4fc94f9-kube-api-access-wwrcl\") pod \"horizon-operator-controller-manager-5b9b8895d5-csct6\" (UID: \"4f281b5b-b656-4d4a-b628-d4bfe4fc94f9\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649172 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7rs7\" (UniqueName: \"kubernetes.io/projected/dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43-kube-api-access-r7rs7\") pod \"keystone-operator-controller-manager-b4d948c87-9zkvx\" (UID: \"dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649209 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrp6c\" (UniqueName: \"kubernetes.io/projected/11d49fcd-6e31-47e5-84a1-c6ae972e13cb-kube-api-access-jrp6c\") pod \"designate-operator-controller-manager-6d8bf5c495-kt4nw\" (UID: \"11d49fcd-6e31-47e5-84a1-c6ae972e13cb\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649243 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8t8k\" (UniqueName: \"kubernetes.io/projected/e9e07b56-2724-4046-8a60-81b751fb0588-kube-api-access-z8t8k\") pod \"manila-operator-controller-manager-54f6768c69-ldrx5\" (UID: \"e9e07b56-2724-4046-8a60-81b751fb0588\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649273 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5jhp\" (UniqueName: \"kubernetes.io/projected/996bfd61-486b-432d-9e09-d3a90ff9124c-kube-api-access-h5jhp\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649316 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jcl5\" (UniqueName: \"kubernetes.io/projected/8629b5e4-e6a8-4c47-b76b-f58a26b42912-kube-api-access-6jcl5\") pod \"ironic-operator-controller-manager-554564d7fc-dgldv\" (UID: \"8629b5e4-e6a8-4c47-b76b-f58a26b42912\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649347 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k272\" (UniqueName: \"kubernetes.io/projected/8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be-kube-api-access-7k272\") pod \"glance-operator-controller-manager-77987464f4-qzq7x\" (UID: \"8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649371 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gjkq\" (UniqueName: \"kubernetes.io/projected/bfca307c-9b00-4c12-bdd6-a394b7cc7cfd-kube-api-access-7gjkq\") pod \"heat-operator-controller-manager-69f49c598c-5szxp\" (UID: \"bfca307c-9b00-4c12-bdd6-a394b7cc7cfd\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649420 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf2sz\" (UniqueName: \"kubernetes.io/projected/8af03a54-ad7a-4684-b5a6-ba83f410e6ed-kube-api-access-kf2sz\") pod \"cinder-operator-controller-manager-5d946d989d-556xv\" (UID: \"8af03a54-ad7a-4684-b5a6-ba83f410e6ed\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.649455 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.670373 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.693337 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpjpx\" (UniqueName: \"kubernetes.io/projected/0cc1b41b-fbf6-4d0c-b721-dcad09c03feb-kube-api-access-vpjpx\") pod \"barbican-operator-controller-manager-868647ff47-xzk2n\" (UID: \"0cc1b41b-fbf6-4d0c-b721-dcad09c03feb\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.693398 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.694370 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.694433 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf2sz\" (UniqueName: \"kubernetes.io/projected/8af03a54-ad7a-4684-b5a6-ba83f410e6ed-kube-api-access-kf2sz\") pod \"cinder-operator-controller-manager-5d946d989d-556xv\" (UID: \"8af03a54-ad7a-4684-b5a6-ba83f410e6ed\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.696542 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-kmhh8"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.700203 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrp6c\" (UniqueName: \"kubernetes.io/projected/11d49fcd-6e31-47e5-84a1-c6ae972e13cb-kube-api-access-jrp6c\") pod \"designate-operator-controller-manager-6d8bf5c495-kt4nw\" (UID: \"11d49fcd-6e31-47e5-84a1-c6ae972e13cb\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.705804 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.709062 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.709961 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.711245 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k272\" (UniqueName: \"kubernetes.io/projected/8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be-kube-api-access-7k272\") pod \"glance-operator-controller-manager-77987464f4-qzq7x\" (UID: \"8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.714584 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-txkm9"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.736102 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.746851 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.747372 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.748933 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.749715 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.751196 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp425\" (UniqueName: \"kubernetes.io/projected/1e872b11-03d6-4d3f-8e06-e10e1e73d917-kube-api-access-lp425\") pod \"mariadb-operator-controller-manager-6994f66f48-rpbt8\" (UID: \"1e872b11-03d6-4d3f-8e06-e10e1e73d917\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.751248 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.751281 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwrcl\" (UniqueName: \"kubernetes.io/projected/4f281b5b-b656-4d4a-b628-d4bfe4fc94f9-kube-api-access-wwrcl\") pod \"horizon-operator-controller-manager-5b9b8895d5-csct6\" (UID: \"4f281b5b-b656-4d4a-b628-d4bfe4fc94f9\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.751323 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7rs7\" (UniqueName: \"kubernetes.io/projected/dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43-kube-api-access-r7rs7\") pod \"keystone-operator-controller-manager-b4d948c87-9zkvx\" (UID: \"dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.751360 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8t8k\" (UniqueName: \"kubernetes.io/projected/e9e07b56-2724-4046-8a60-81b751fb0588-kube-api-access-z8t8k\") pod \"manila-operator-controller-manager-54f6768c69-ldrx5\" (UID: \"e9e07b56-2724-4046-8a60-81b751fb0588\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.751381 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5jhp\" (UniqueName: \"kubernetes.io/projected/996bfd61-486b-432d-9e09-d3a90ff9124c-kube-api-access-h5jhp\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.751399 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jcl5\" (UniqueName: \"kubernetes.io/projected/8629b5e4-e6a8-4c47-b76b-f58a26b42912-kube-api-access-6jcl5\") pod \"ironic-operator-controller-manager-554564d7fc-dgldv\" (UID: \"8629b5e4-e6a8-4c47-b76b-f58a26b42912\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.751418 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrjml\" (UniqueName: \"kubernetes.io/projected/b123191d-e55b-4ddc-90ea-abcb34c97be2-kube-api-access-vrjml\") pod \"neutron-operator-controller-manager-64ddbf8bb-27hfc\" (UID: \"b123191d-e55b-4ddc-90ea-abcb34c97be2\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.751441 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gjkq\" (UniqueName: \"kubernetes.io/projected/bfca307c-9b00-4c12-bdd6-a394b7cc7cfd-kube-api-access-7gjkq\") pod \"heat-operator-controller-manager-69f49c598c-5szxp\" (UID: \"bfca307c-9b00-4c12-bdd6-a394b7cc7cfd\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp"
Feb 19 05:39:43 crc kubenswrapper[5012]: E0219 05:39:43.751716 5012 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 19 05:39:43 crc kubenswrapper[5012]: E0219 05:39:43.751757 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert podName:996bfd61-486b-432d-9e09-d3a90ff9124c nodeName:}" failed. No retries permitted until 2026-02-19 05:39:44.25174417 +0000 UTC m=+880.285066739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert") pod "infra-operator-controller-manager-79d975b745-cp8kx" (UID: "996bfd61-486b-432d-9e09-d3a90ff9124c") : secret "infra-operator-webhook-server-cert" not found
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.752450 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-zmpvr"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.754140 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.755144 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.756615 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.759363 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-dfvzm"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.759855 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.777571 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7rs7\" (UniqueName: \"kubernetes.io/projected/dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43-kube-api-access-r7rs7\") pod \"keystone-operator-controller-manager-b4d948c87-9zkvx\" (UID: \"dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.778865 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gjkq\" (UniqueName: \"kubernetes.io/projected/bfca307c-9b00-4c12-bdd6-a394b7cc7cfd-kube-api-access-7gjkq\") pod \"heat-operator-controller-manager-69f49c598c-5szxp\" (UID: \"bfca307c-9b00-4c12-bdd6-a394b7cc7cfd\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.779659 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwrcl\" (UniqueName: \"kubernetes.io/projected/4f281b5b-b656-4d4a-b628-d4bfe4fc94f9-kube-api-access-wwrcl\") pod \"horizon-operator-controller-manager-5b9b8895d5-csct6\" (UID: \"4f281b5b-b656-4d4a-b628-d4bfe4fc94f9\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.779778 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jcl5\" (UniqueName: \"kubernetes.io/projected/8629b5e4-e6a8-4c47-b76b-f58a26b42912-kube-api-access-6jcl5\") pod \"ironic-operator-controller-manager-554564d7fc-dgldv\" (UID: \"8629b5e4-e6a8-4c47-b76b-f58a26b42912\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.785387 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8t8k\" (UniqueName: \"kubernetes.io/projected/e9e07b56-2724-4046-8a60-81b751fb0588-kube-api-access-z8t8k\") pod \"manila-operator-controller-manager-54f6768c69-ldrx5\" (UID: \"e9e07b56-2724-4046-8a60-81b751fb0588\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.792854 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5jhp\" (UniqueName: \"kubernetes.io/projected/996bfd61-486b-432d-9e09-d3a90ff9124c-kube-api-access-h5jhp\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.799424 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.804558 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.815431 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.829977 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.830965 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.835963 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-kttx7"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.850984 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.852113 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrjml\" (UniqueName: \"kubernetes.io/projected/b123191d-e55b-4ddc-90ea-abcb34c97be2-kube-api-access-vrjml\") pod \"neutron-operator-controller-manager-64ddbf8bb-27hfc\" (UID: \"b123191d-e55b-4ddc-90ea-abcb34c97be2\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.852201 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp425\" (UniqueName: \"kubernetes.io/projected/1e872b11-03d6-4d3f-8e06-e10e1e73d917-kube-api-access-lp425\") pod \"mariadb-operator-controller-manager-6994f66f48-rpbt8\" (UID: \"1e872b11-03d6-4d3f-8e06-e10e1e73d917\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.852235 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg8vb\" (UniqueName: \"kubernetes.io/projected/10e6fa53-581b-4965-8a38-c70a5c61c6d7-kube-api-access-kg8vb\") pod \"ovn-operator-controller-manager-d44cf6b75-25qtj\" (UID: \"10e6fa53-581b-4965-8a38-c70a5c61c6d7\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.852294 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hv98\" (UniqueName: \"kubernetes.io/projected/ef60eda4-7ead-499b-b70f-07a34574096f-kube-api-access-7hv98\") pod \"octavia-operator-controller-manager-69f8888797-pqrs7\" (UID: \"ef60eda4-7ead-499b-b70f-07a34574096f\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.852412 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf7vn\" (UniqueName: \"kubernetes.io/projected/457202a7-ae9f-4d06-8690-d220e532b305-kube-api-access-xf7vn\") pod \"nova-operator-controller-manager-567668f5cf-l65c5\" (UID: \"457202a7-ae9f-4d06-8690-d220e532b305\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.859790 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.863268 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.875712 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.881272 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.882430 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.890207 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.892904 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-25nnl"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.894938 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrjml\" (UniqueName: \"kubernetes.io/projected/b123191d-e55b-4ddc-90ea-abcb34c97be2-kube-api-access-vrjml\") pod \"neutron-operator-controller-manager-64ddbf8bb-27hfc\" (UID: \"b123191d-e55b-4ddc-90ea-abcb34c97be2\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.895779 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp425\" (UniqueName: \"kubernetes.io/projected/1e872b11-03d6-4d3f-8e06-e10e1e73d917-kube-api-access-lp425\") pod \"mariadb-operator-controller-manager-6994f66f48-rpbt8\" (UID: \"1e872b11-03d6-4d3f-8e06-e10e1e73d917\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.925354 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.926323 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.926456 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.930358 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.932356 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dzkxr"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.935696 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.955583 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpcsc\" (UniqueName: \"kubernetes.io/projected/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-kube-api-access-kpcsc\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.955635 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rklzj\" (UniqueName: \"kubernetes.io/projected/c55ed223-371b-409a-bcb6-8ca6d2a3c908-kube-api-access-rklzj\") pod \"swift-operator-controller-manager-68f46476f-6hfg4\" (UID: \"c55ed223-371b-409a-bcb6-8ca6d2a3c908\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.955662 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg8vb\" (UniqueName: \"kubernetes.io/projected/10e6fa53-581b-4965-8a38-c70a5c61c6d7-kube-api-access-kg8vb\") pod \"ovn-operator-controller-manager-d44cf6b75-25qtj\" (UID: \"10e6fa53-581b-4965-8a38-c70a5c61c6d7\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.955701 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwtvn\" (UniqueName: \"kubernetes.io/projected/08a4f79c-e42e-4609-b104-01b9a05ac95a-kube-api-access-fwtvn\") pod \"placement-operator-controller-manager-8497b45c89-nlqtw\" (UID: \"08a4f79c-e42e-4609-b104-01b9a05ac95a\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.955729 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.955753 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hv98\" (UniqueName: \"kubernetes.io/projected/ef60eda4-7ead-499b-b70f-07a34574096f-kube-api-access-7hv98\") pod \"octavia-operator-controller-manager-69f8888797-pqrs7\" (UID: \"ef60eda4-7ead-499b-b70f-07a34574096f\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.955779 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz6h7\" (UniqueName: \"kubernetes.io/projected/49d66f3b-e451-4b73-bc6a-4b854a71a4d6-kube-api-access-bz6h7\") pod \"telemetry-operator-controller-manager-7f45b4ff68-qjpw6\" (UID: \"49d66f3b-e451-4b73-bc6a-4b854a71a4d6\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.955810 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf7vn\" (UniqueName: \"kubernetes.io/projected/457202a7-ae9f-4d06-8690-d220e532b305-kube-api-access-xf7vn\") pod \"nova-operator-controller-manager-567668f5cf-l65c5\" (UID: \"457202a7-ae9f-4d06-8690-d220e532b305\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.973572 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hv98\" (UniqueName: \"kubernetes.io/projected/ef60eda4-7ead-499b-b70f-07a34574096f-kube-api-access-7hv98\") pod \"octavia-operator-controller-manager-69f8888797-pqrs7\" (UID: \"ef60eda4-7ead-499b-b70f-07a34574096f\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.974013 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg8vb\" (UniqueName: \"kubernetes.io/projected/10e6fa53-581b-4965-8a38-c70a5c61c6d7-kube-api-access-kg8vb\") pod \"ovn-operator-controller-manager-d44cf6b75-25qtj\" (UID: \"10e6fa53-581b-4965-8a38-c70a5c61c6d7\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.976279 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-pcpk8"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.977173 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.985382 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-pcpk8"]
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.990746 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-zsgtv"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.994507 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n"
Feb 19 05:39:43 crc kubenswrapper[5012]: I0219 05:39:43.994793 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf7vn\" (UniqueName: \"kubernetes.io/projected/457202a7-ae9f-4d06-8690-d220e532b305-kube-api-access-xf7vn\") pod \"nova-operator-controller-manager-567668f5cf-l65c5\" (UID: \"457202a7-ae9f-4d06-8690-d220e532b305\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5"
Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.012779 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47"]
Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.013694 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47"
Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.021149 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-vm67g"
Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.025240 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47"]
Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.057481 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpcsc\" (UniqueName: \"kubernetes.io/projected/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-kube-api-access-kpcsc\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4"
Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.057541 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rklzj\" (UniqueName: \"kubernetes.io/projected/c55ed223-371b-409a-bcb6-8ca6d2a3c908-kube-api-access-rklzj\") pod \"swift-operator-controller-manager-68f46476f-6hfg4\" (UID: \"c55ed223-371b-409a-bcb6-8ca6d2a3c908\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4"
Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.057585 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwtvn\" (UniqueName: \"kubernetes.io/projected/08a4f79c-e42e-4609-b104-01b9a05ac95a-kube-api-access-fwtvn\") pod \"placement-operator-controller-manager-8497b45c89-nlqtw\" (UID: \"08a4f79c-e42e-4609-b104-01b9a05ac95a\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw"
Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.057609 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4"
Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.057636 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz6h7\" (UniqueName: \"kubernetes.io/projected/49d66f3b-e451-4b73-bc6a-4b854a71a4d6-kube-api-access-bz6h7\") pod \"telemetry-operator-controller-manager-7f45b4ff68-qjpw6\" (UID: \"49d66f3b-e451-4b73-bc6a-4b854a71a4d6\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6"
Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.060055 5012 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.060143 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert podName:d6eb3922-90e6-4bb1-8caa-aac6b69c76b0 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:44.560124947 +0000 UTC m=+880.593447516 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" (UID: "d6eb3922-90e6-4bb1-8caa-aac6b69c76b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.076664 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6"
Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.077485 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpcsc\" (UniqueName: \"kubernetes.io/projected/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-kube-api-access-kpcsc\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4"
Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.081413 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rklzj\" (UniqueName: \"kubernetes.io/projected/c55ed223-371b-409a-bcb6-8ca6d2a3c908-kube-api-access-rklzj\") pod \"swift-operator-controller-manager-68f46476f-6hfg4\" (UID: \"c55ed223-371b-409a-bcb6-8ca6d2a3c908\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4"
Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.082397 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz6h7\" (UniqueName: \"kubernetes.io/projected/49d66f3b-e451-4b73-bc6a-4b854a71a4d6-kube-api-access-bz6h7\") pod \"telemetry-operator-controller-manager-7f45b4ff68-qjpw6\" (UID: \"49d66f3b-e451-4b73-bc6a-4b854a71a4d6\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6"
Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.099965 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwtvn\" (UniqueName: \"kubernetes.io/projected/08a4f79c-e42e-4609-b104-01b9a05ac95a-kube-api-access-fwtvn\") pod \"placement-operator-controller-manager-8497b45c89-nlqtw\" (UID: \"08a4f79c-e42e-4609-b104-01b9a05ac95a\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw"
Feb 19 05:39:44 crc kubenswrapper[5012]: I0219
05:39:44.113678 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.155137 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.155676 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n"] Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.157816 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.165960 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrmp2\" (UniqueName: \"kubernetes.io/projected/739941d0-4bff-4dae-8f01-636386a37dd0-kube-api-access-mrmp2\") pod \"watcher-operator-controller-manager-5db88f68c-z5r47\" (UID: \"739941d0-4bff-4dae-8f01-636386a37dd0\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.166098 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjlk4\" (UniqueName: \"kubernetes.io/projected/d1f124a8-4132-458d-a5a5-1839d31e7772-kube-api-access-gjlk4\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.166251 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f97jw\" (UniqueName: 
\"kubernetes.io/projected/73e25e30-860d-4faf-b1f3-bc284f7189d1-kube-api-access-f97jw\") pod \"test-operator-controller-manager-7866795846-pcpk8\" (UID: \"73e25e30-860d-4faf-b1f3-bc284f7189d1\") " pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.167330 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.172402 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.196744 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.197030 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.194636 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n"] Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.178086 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.173453 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.174992 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-d8sxf" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.209547 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.257586 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.300440 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w"] Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.301395 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.301729 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w"] Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.301906 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.301961 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.302027 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjlk4\" (UniqueName: \"kubernetes.io/projected/d1f124a8-4132-458d-a5a5-1839d31e7772-kube-api-access-gjlk4\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.302050 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrmp2\" (UniqueName: \"kubernetes.io/projected/739941d0-4bff-4dae-8f01-636386a37dd0-kube-api-access-mrmp2\") pod \"watcher-operator-controller-manager-5db88f68c-z5r47\" (UID: 
\"739941d0-4bff-4dae-8f01-636386a37dd0\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.302077 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.302100 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f97jw\" (UniqueName: \"kubernetes.io/projected/73e25e30-860d-4faf-b1f3-bc284f7189d1-kube-api-access-f97jw\") pod \"test-operator-controller-manager-7866795846-pcpk8\" (UID: \"73e25e30-860d-4faf-b1f3-bc284f7189d1\") " pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.303091 5012 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.303137 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:44.803120042 +0000 UTC m=+880.836442611 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "metrics-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.303477 5012 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.303524 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:44.803506501 +0000 UTC m=+880.836829070 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "webhook-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.303566 5012 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.303585 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert podName:996bfd61-486b-432d-9e09-d3a90ff9124c nodeName:}" failed. No retries permitted until 2026-02-19 05:39:45.303578563 +0000 UTC m=+881.336901132 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert") pod "infra-operator-controller-manager-79d975b745-cp8kx" (UID: "996bfd61-486b-432d-9e09-d3a90ff9124c") : secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.306703 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-tj57r" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.315773 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp"] Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.322665 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrmp2\" (UniqueName: \"kubernetes.io/projected/739941d0-4bff-4dae-8f01-636386a37dd0-kube-api-access-mrmp2\") pod \"watcher-operator-controller-manager-5db88f68c-z5r47\" (UID: \"739941d0-4bff-4dae-8f01-636386a37dd0\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.331931 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjlk4\" (UniqueName: \"kubernetes.io/projected/d1f124a8-4132-458d-a5a5-1839d31e7772-kube-api-access-gjlk4\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.336120 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f97jw\" (UniqueName: \"kubernetes.io/projected/73e25e30-860d-4faf-b1f3-bc284f7189d1-kube-api-access-f97jw\") pod \"test-operator-controller-manager-7866795846-pcpk8\" (UID: \"73e25e30-860d-4faf-b1f3-bc284f7189d1\") " 
pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.352995 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv"] Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.363905 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.403250 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmv74\" (UniqueName: \"kubernetes.io/projected/4a3cde05-282a-4c65-9570-74d04c71a034-kube-api-access-nmv74\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mqc2w\" (UID: \"4a3cde05-282a-4c65-9570-74d04c71a034\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.404506 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.435106 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.435153 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.503445 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x"] Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.504454 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmv74\" (UniqueName: \"kubernetes.io/projected/4a3cde05-282a-4c65-9570-74d04c71a034-kube-api-access-nmv74\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mqc2w\" (UID: \"4a3cde05-282a-4c65-9570-74d04c71a034\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.606966 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.607124 5012 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.607171 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert podName:d6eb3922-90e6-4bb1-8caa-aac6b69c76b0 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:45.607157193 +0000 UTC m=+881.640479762 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" (UID: "d6eb3922-90e6-4bb1-8caa-aac6b69c76b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.611592 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmv74\" (UniqueName: \"kubernetes.io/projected/4a3cde05-282a-4c65-9570-74d04c71a034-kube-api-access-nmv74\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mqc2w\" (UID: \"4a3cde05-282a-4c65-9570-74d04c71a034\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.615843 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.616250 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw"] Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.673394 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" Feb 19 05:39:44 crc kubenswrapper[5012]: W0219 05:39:44.688122 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b3edb91_d9bc_4f6f_9cf5_5d40f05bf3be.slice/crio-04b965a1d44ec7156a02b6ebefe193ac111779d07a2f8efd2f7b6560532a1261 WatchSource:0}: Error finding container 04b965a1d44ec7156a02b6ebefe193ac111779d07a2f8efd2f7b6560532a1261: Status 404 returned error can't find the container with id 04b965a1d44ec7156a02b6ebefe193ac111779d07a2f8efd2f7b6560532a1261 Feb 19 05:39:44 crc kubenswrapper[5012]: W0219 05:39:44.691956 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11d49fcd_6e31_47e5_84a1_c6ae972e13cb.slice/crio-4550c81aa85b109fc7362492bebfcc80ab165396c0523f653f1a8fcca7cd1287 WatchSource:0}: Error finding container 4550c81aa85b109fc7362492bebfcc80ab165396c0523f653f1a8fcca7cd1287: Status 404 returned error can't find the container with id 4550c81aa85b109fc7362492bebfcc80ab165396c0523f653f1a8fcca7cd1287 Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.810194 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: I0219 05:39:44.810558 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " 
pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.812825 5012 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.812875 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:45.812860549 +0000 UTC m=+881.846183118 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "webhook-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.813501 5012 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 05:39:44 crc kubenswrapper[5012]: E0219 05:39:44.813549 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:45.813533675 +0000 UTC m=+881.846856244 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "metrics-server-cert" not found Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.002221 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5"] Feb 19 05:39:45 crc kubenswrapper[5012]: W0219 05:39:45.011751 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9e07b56_2724_4046_8a60_81b751fb0588.slice/crio-6bdfb2b40753443c7b1d250190983f4ab67ab73262aa89222bc8a40efd763009 WatchSource:0}: Error finding container 6bdfb2b40753443c7b1d250190983f4ab67ab73262aa89222bc8a40efd763009: Status 404 returned error can't find the container with id 6bdfb2b40753443c7b1d250190983f4ab67ab73262aa89222bc8a40efd763009 Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.048887 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.105524 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv"] Feb 19 05:39:45 crc kubenswrapper[5012]: W0219 05:39:45.111999 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8629b5e4_e6a8_4c47_b76b_f58a26b42912.slice/crio-dfe9e98df34d2a579dbd4d4cb78781090d22da89b2c47977e6a49d95a4098d34 WatchSource:0}: Error finding container dfe9e98df34d2a579dbd4d4cb78781090d22da89b2c47977e6a49d95a4098d34: Status 404 returned error can't find the container with id dfe9e98df34d2a579dbd4d4cb78781090d22da89b2c47977e6a49d95a4098d34 Feb 19 05:39:45 crc 
kubenswrapper[5012]: I0219 05:39:45.130977 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.140008 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc"] Feb 19 05:39:45 crc kubenswrapper[5012]: W0219 05:39:45.141890 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cc1b41b_fbf6_4d0c_b721_dcad09c03feb.slice/crio-76125640d3b91baa619b96b41ac081d093fa1637b1f1f96011116e378516421c WatchSource:0}: Error finding container 76125640d3b91baa619b96b41ac081d093fa1637b1f1f96011116e378516421c: Status 404 returned error can't find the container with id 76125640d3b91baa619b96b41ac081d093fa1637b1f1f96011116e378516421c Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.145468 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.315648 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.315832 5012 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.315886 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert podName:996bfd61-486b-432d-9e09-d3a90ff9124c nodeName:}" 
failed. No retries permitted until 2026-02-19 05:39:47.315870724 +0000 UTC m=+883.349193293 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert") pod "infra-operator-controller-manager-79d975b745-cp8kx" (UID: "996bfd61-486b-432d-9e09-d3a90ff9124c") : secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.322068 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5" event={"ID":"e9e07b56-2724-4046-8a60-81b751fb0588","Type":"ContainerStarted","Data":"6bdfb2b40753443c7b1d250190983f4ab67ab73262aa89222bc8a40efd763009"} Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.328477 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.334065 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n" event={"ID":"0cc1b41b-fbf6-4d0c-b721-dcad09c03feb","Type":"ContainerStarted","Data":"76125640d3b91baa619b96b41ac081d093fa1637b1f1f96011116e378516421c"} Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.335034 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.340643 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7"] Feb 19 05:39:45 crc kubenswrapper[5012]: W0219 05:39:45.344519 5012 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10e6fa53_581b_4965_8a38_c70a5c61c6d7.slice/crio-28860c4863093c32d12a4a6a58e8376a1127942d6a6b57864c8f33e0d6731121 WatchSource:0}: Error finding container 28860c4863093c32d12a4a6a58e8376a1127942d6a6b57864c8f33e0d6731121: Status 404 returned error can't find the container with id 28860c4863093c32d12a4a6a58e8376a1127942d6a6b57864c8f33e0d6731121 Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.350952 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.350986 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv" event={"ID":"8629b5e4-e6a8-4c47-b76b-f58a26b42912","Type":"ContainerStarted","Data":"dfe9e98df34d2a579dbd4d4cb78781090d22da89b2c47977e6a49d95a4098d34"} Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.352398 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx" event={"ID":"dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43","Type":"ContainerStarted","Data":"1bef771e352cf5e8c82b4ae4872bc4d5992083e4f205de0a4ac903c26530988e"} Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.352468 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.353379 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp" event={"ID":"bfca307c-9b00-4c12-bdd6-a394b7cc7cfd","Type":"ContainerStarted","Data":"544ef579d1e51bbd16a64d2df2d8493aed6d7ff93c4852d86e3d6fb0786b05a6"} Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.357831 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw" event={"ID":"11d49fcd-6e31-47e5-84a1-c6ae972e13cb","Type":"ContainerStarted","Data":"4550c81aa85b109fc7362492bebfcc80ab165396c0523f653f1a8fcca7cd1287"} Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.358710 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc" event={"ID":"b123191d-e55b-4ddc-90ea-abcb34c97be2","Type":"ContainerStarted","Data":"809aed11609ee8d1d19aef2d9e34018c8adb897dd55aa5e6975b2071e559e959"} Feb 19 05:39:45 crc kubenswrapper[5012]: W0219 05:39:45.358824 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08a4f79c_e42e_4609_b104_01b9a05ac95a.slice/crio-af0cf467b90a785a8db5fe5e9139dfd99f6f63c686785298c2075d602de149d3 WatchSource:0}: Error finding container af0cf467b90a785a8db5fe5e9139dfd99f6f63c686785298c2075d602de149d3: Status 404 returned error can't find the container with id af0cf467b90a785a8db5fe5e9139dfd99f6f63c686785298c2075d602de149d3 Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.359495 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.361499 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8" event={"ID":"1e872b11-03d6-4d3f-8e06-e10e1e73d917","Type":"ContainerStarted","Data":"0042f4c98fe7aa4b93329f86065d478f825bc27a14c77266fa74b1c3feae03f7"} Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.364421 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv" 
event={"ID":"8af03a54-ad7a-4684-b5a6-ba83f410e6ed","Type":"ContainerStarted","Data":"c517c1f32113b9b24e196a0813209ed6df8ce8b867be34d0f98f119c6be187a0"} Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.367609 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x" event={"ID":"8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be","Type":"ContainerStarted","Data":"04b965a1d44ec7156a02b6ebefe193ac111779d07a2f8efd2f7b6560532a1261"} Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.380589 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wwrcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-csct6_openstack-operators(4f281b5b-b656-4d4a-b628-d4bfe4fc94f9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 05:39:45 crc kubenswrapper[5012]: W0219 05:39:45.381056 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef60eda4_7ead_499b_b70f_07a34574096f.slice/crio-5cc777d9a45a187848c8e7aad90fa31e325037e623c8bfbec46563c614780937 WatchSource:0}: Error finding container 5cc777d9a45a187848c8e7aad90fa31e325037e623c8bfbec46563c614780937: Status 404 returned error can't find the container with id 5cc777d9a45a187848c8e7aad90fa31e325037e623c8bfbec46563c614780937 Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.382148 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6" podUID="4f281b5b-b656-4d4a-b628-d4bfe4fc94f9" Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.393937 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7hv98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-pqrs7_openstack-operators(ef60eda4-7ead-499b-b70f-07a34574096f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.395685 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" podUID="ef60eda4-7ead-499b-b70f-07a34574096f" Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.460766 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.469138 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47"] Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.476939 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w"] Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.477429 5012 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mrmp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-z5r47_openstack-operators(739941d0-4bff-4dae-8f01-636386a37dd0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.478920 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" podUID="739941d0-4bff-4dae-8f01-636386a37dd0" Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.482843 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-pcpk8"] Feb 19 05:39:45 crc kubenswrapper[5012]: W0219 05:39:45.484253 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a3cde05_282a_4c65_9570_74d04c71a034.slice/crio-3e2225296f9d9a7e0dc5481e11a191da5afcbeca118dc2ffa47f3c89fff56545 WatchSource:0}: Error finding container 3e2225296f9d9a7e0dc5481e11a191da5afcbeca118dc2ffa47f3c89fff56545: Status 404 returned error can't find the container with id 
3e2225296f9d9a7e0dc5481e11a191da5afcbeca118dc2ffa47f3c89fff56545 Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.487371 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nmv74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-mqc2w_openstack-operators(4a3cde05-282a-4c65-9570-74d04c71a034): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.488541 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" podUID="4a3cde05-282a-4c65-9570-74d04c71a034" Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.491952 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f97jw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-pcpk8_openstack-operators(73e25e30-860d-4faf-b1f3-bc284f7189d1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.494678 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" podUID="73e25e30-860d-4faf-b1f3-bc284f7189d1" Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.620148 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.620330 5012 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 
05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.620393 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert podName:d6eb3922-90e6-4bb1-8caa-aac6b69c76b0 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:47.620377416 +0000 UTC m=+883.653699985 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" (UID: "d6eb3922-90e6-4bb1-8caa-aac6b69c76b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.833547 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:45 crc kubenswrapper[5012]: I0219 05:39:45.834451 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.834706 5012 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.834777 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 
nodeName:}" failed. No retries permitted until 2026-02-19 05:39:47.834762875 +0000 UTC m=+883.868085434 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "webhook-server-cert" not found Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.836133 5012 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 05:39:45 crc kubenswrapper[5012]: E0219 05:39:45.836167 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:47.836159149 +0000 UTC m=+883.869481718 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "metrics-server-cert" not found Feb 19 05:39:46 crc kubenswrapper[5012]: I0219 05:39:46.381272 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5" event={"ID":"457202a7-ae9f-4d06-8690-d220e532b305","Type":"ContainerStarted","Data":"6177ccacda2cb011d9b0dbbb542a849d475f022c0131ea89a1858e858cd5077c"} Feb 19 05:39:46 crc kubenswrapper[5012]: I0219 05:39:46.383789 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj" event={"ID":"10e6fa53-581b-4965-8a38-c70a5c61c6d7","Type":"ContainerStarted","Data":"28860c4863093c32d12a4a6a58e8376a1127942d6a6b57864c8f33e0d6731121"} Feb 19 05:39:46 crc 
kubenswrapper[5012]: I0219 05:39:46.386192 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" event={"ID":"4a3cde05-282a-4c65-9570-74d04c71a034","Type":"ContainerStarted","Data":"3e2225296f9d9a7e0dc5481e11a191da5afcbeca118dc2ffa47f3c89fff56545"} Feb 19 05:39:46 crc kubenswrapper[5012]: I0219 05:39:46.392135 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6" event={"ID":"4f281b5b-b656-4d4a-b628-d4bfe4fc94f9","Type":"ContainerStarted","Data":"9e68082543b88eb8692c2971ce037e9fbe73463f2950d28f7d766fdc47355f5d"} Feb 19 05:39:46 crc kubenswrapper[5012]: E0219 05:39:46.394125 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6" podUID="4f281b5b-b656-4d4a-b628-d4bfe4fc94f9" Feb 19 05:39:46 crc kubenswrapper[5012]: I0219 05:39:46.395478 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw" event={"ID":"08a4f79c-e42e-4609-b104-01b9a05ac95a","Type":"ContainerStarted","Data":"af0cf467b90a785a8db5fe5e9139dfd99f6f63c686785298c2075d602de149d3"} Feb 19 05:39:46 crc kubenswrapper[5012]: I0219 05:39:46.396962 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6" event={"ID":"49d66f3b-e451-4b73-bc6a-4b854a71a4d6","Type":"ContainerStarted","Data":"5c8ea10b6114011fa0d4d80e27fa65b7a59bb00725ae56c16c0f2ef7a012c48d"} Feb 19 05:39:46 crc kubenswrapper[5012]: E0219 05:39:46.397733 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" podUID="4a3cde05-282a-4c65-9570-74d04c71a034" Feb 19 05:39:46 crc kubenswrapper[5012]: I0219 05:39:46.401441 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" event={"ID":"739941d0-4bff-4dae-8f01-636386a37dd0","Type":"ContainerStarted","Data":"5efab640d65cac0525e62cc953e3c450515f956276b7be6332d4a135bc77b341"} Feb 19 05:39:46 crc kubenswrapper[5012]: I0219 05:39:46.402629 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" event={"ID":"ef60eda4-7ead-499b-b70f-07a34574096f","Type":"ContainerStarted","Data":"5cc777d9a45a187848c8e7aad90fa31e325037e623c8bfbec46563c614780937"} Feb 19 05:39:46 crc kubenswrapper[5012]: E0219 05:39:46.402903 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" podUID="739941d0-4bff-4dae-8f01-636386a37dd0" Feb 19 05:39:46 crc kubenswrapper[5012]: E0219 05:39:46.404355 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" podUID="ef60eda4-7ead-499b-b70f-07a34574096f" Feb 19 
05:39:46 crc kubenswrapper[5012]: I0219 05:39:46.404701 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4" event={"ID":"c55ed223-371b-409a-bcb6-8ca6d2a3c908","Type":"ContainerStarted","Data":"75b0635431c48105bf4783209996d0f1630c0d67ceb2343139c64539cb777c14"} Feb 19 05:39:46 crc kubenswrapper[5012]: I0219 05:39:46.405601 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" event={"ID":"73e25e30-860d-4faf-b1f3-bc284f7189d1","Type":"ContainerStarted","Data":"4f552f87075ec49e67fc4271c11ee0d9390ff98eca4aeb8198617b81efbec60b"} Feb 19 05:39:46 crc kubenswrapper[5012]: E0219 05:39:46.407068 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" podUID="73e25e30-860d-4faf-b1f3-bc284f7189d1" Feb 19 05:39:47 crc kubenswrapper[5012]: I0219 05:39:47.359155 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.359399 5012 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.360117 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert 
podName:996bfd61-486b-432d-9e09-d3a90ff9124c nodeName:}" failed. No retries permitted until 2026-02-19 05:39:51.359530691 +0000 UTC m=+887.392853260 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert") pod "infra-operator-controller-manager-79d975b745-cp8kx" (UID: "996bfd61-486b-432d-9e09-d3a90ff9124c") : secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.420416 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6" podUID="4f281b5b-b656-4d4a-b628-d4bfe4fc94f9" Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.422888 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" podUID="ef60eda4-7ead-499b-b70f-07a34574096f" Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.422965 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" podUID="739941d0-4bff-4dae-8f01-636386a37dd0" Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.422989 5012 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" podUID="4a3cde05-282a-4c65-9570-74d04c71a034" Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.423144 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" podUID="73e25e30-860d-4faf-b1f3-bc284f7189d1" Feb 19 05:39:47 crc kubenswrapper[5012]: I0219 05:39:47.665623 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.666420 5012 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.666576 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert podName:d6eb3922-90e6-4bb1-8caa-aac6b69c76b0 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:51.666557085 +0000 UTC m=+887.699879654 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" (UID: "d6eb3922-90e6-4bb1-8caa-aac6b69c76b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:47 crc kubenswrapper[5012]: I0219 05:39:47.868941 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:47 crc kubenswrapper[5012]: I0219 05:39:47.869613 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.869120 5012 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.869844 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:51.869821503 +0000 UTC m=+887.903144072 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "metrics-server-cert" not found Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.869703 5012 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 05:39:47 crc kubenswrapper[5012]: E0219 05:39:47.869979 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:51.869968046 +0000 UTC m=+887.903290615 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "webhook-server-cert" not found Feb 19 05:39:51 crc kubenswrapper[5012]: I0219 05:39:51.428346 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" Feb 19 05:39:51 crc kubenswrapper[5012]: E0219 05:39:51.428479 5012 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:51 crc kubenswrapper[5012]: E0219 05:39:51.428609 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert 
podName:996bfd61-486b-432d-9e09-d3a90ff9124c nodeName:}" failed. No retries permitted until 2026-02-19 05:39:59.428595781 +0000 UTC m=+895.461918350 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert") pod "infra-operator-controller-manager-79d975b745-cp8kx" (UID: "996bfd61-486b-432d-9e09-d3a90ff9124c") : secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:51 crc kubenswrapper[5012]: I0219 05:39:51.737627 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:39:51 crc kubenswrapper[5012]: E0219 05:39:51.741032 5012 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:51 crc kubenswrapper[5012]: E0219 05:39:51.745978 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert podName:d6eb3922-90e6-4bb1-8caa-aac6b69c76b0 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:59.745947966 +0000 UTC m=+895.779270535 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" (UID: "d6eb3922-90e6-4bb1-8caa-aac6b69c76b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:51 crc kubenswrapper[5012]: I0219 05:39:51.947362 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:51 crc kubenswrapper[5012]: I0219 05:39:51.947416 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:51 crc kubenswrapper[5012]: E0219 05:39:51.947599 5012 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 05:39:51 crc kubenswrapper[5012]: E0219 05:39:51.947649 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:59.947632794 +0000 UTC m=+895.980955363 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "webhook-server-cert" not found Feb 19 05:39:51 crc kubenswrapper[5012]: E0219 05:39:51.947754 5012 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 05:39:51 crc kubenswrapper[5012]: E0219 05:39:51.947900 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:39:59.94786227 +0000 UTC m=+895.981184899 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "metrics-server-cert" not found Feb 19 05:39:57 crc kubenswrapper[5012]: E0219 05:39:57.206683 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" Feb 19 05:39:57 crc kubenswrapper[5012]: E0219 05:39:57.207527 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vpjpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-868647ff47-xzk2n_openstack-operators(0cc1b41b-fbf6-4d0c-b721-dcad09c03feb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:39:57 crc kubenswrapper[5012]: E0219 05:39:57.208789 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n" podUID="0cc1b41b-fbf6-4d0c-b721-dcad09c03feb" Feb 19 05:39:57 crc kubenswrapper[5012]: E0219 05:39:57.495078 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n" podUID="0cc1b41b-fbf6-4d0c-b721-dcad09c03feb" Feb 19 05:39:57 crc kubenswrapper[5012]: E0219 05:39:57.834522 5012 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 19 05:39:57 crc kubenswrapper[5012]: E0219 05:39:57.834721 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rklzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-6hfg4_openstack-operators(c55ed223-371b-409a-bcb6-8ca6d2a3c908): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:39:57 crc kubenswrapper[5012]: E0219 05:39:57.836553 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4" podUID="c55ed223-371b-409a-bcb6-8ca6d2a3c908" Feb 19 05:39:58 crc kubenswrapper[5012]: E0219 05:39:58.502393 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4" podUID="c55ed223-371b-409a-bcb6-8ca6d2a3c908" Feb 19 05:39:59 crc kubenswrapper[5012]: I0219 05:39:59.467182 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.467443 5012 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.467543 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert podName:996bfd61-486b-432d-9e09-d3a90ff9124c nodeName:}" failed. No retries permitted until 2026-02-19 05:40:15.467521344 +0000 UTC m=+911.500843923 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert") pod "infra-operator-controller-manager-79d975b745-cp8kx" (UID: "996bfd61-486b-432d-9e09-d3a90ff9124c") : secret "infra-operator-webhook-server-cert" not found Feb 19 05:39:59 crc kubenswrapper[5012]: I0219 05:39:59.772992 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.773212 5012 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.773747 5012 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert podName:d6eb3922-90e6-4bb1-8caa-aac6b69c76b0 nodeName:}" failed. No retries permitted until 2026-02-19 05:40:15.773726478 +0000 UTC m=+911.807049057 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert") pod "openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" (UID: "d6eb3922-90e6-4bb1-8caa-aac6b69c76b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.919422 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.919694 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7k272,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987464f4-qzq7x_openstack-operators(8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.920908 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x" podUID="8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be" Feb 19 05:39:59 crc kubenswrapper[5012]: I0219 05:39:59.978201 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:59 crc kubenswrapper[5012]: I0219 05:39:59.978329 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.978441 5012 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.978538 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:40:15.978513613 +0000 UTC m=+912.011836192 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "metrics-server-cert" not found Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.978597 5012 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 05:39:59 crc kubenswrapper[5012]: E0219 05:39:59.978691 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs podName:d1f124a8-4132-458d-a5a5-1839d31e7772 nodeName:}" failed. No retries permitted until 2026-02-19 05:40:15.978670227 +0000 UTC m=+912.011992806 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs") pod "openstack-operator-controller-manager-69ff7bc449-tj54n" (UID: "d1f124a8-4132-458d-a5a5-1839d31e7772") : secret "webhook-server-cert" not found Feb 19 05:40:00 crc kubenswrapper[5012]: E0219 05:40:00.516127 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x" podUID="8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be" Feb 19 05:40:00 crc kubenswrapper[5012]: E0219 05:40:00.707536 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" Feb 19 05:40:00 crc kubenswrapper[5012]: E0219 
05:40:00.707836 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bz6h7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7f45b4ff68-qjpw6_openstack-operators(49d66f3b-e451-4b73-bc6a-4b854a71a4d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:40:00 crc kubenswrapper[5012]: E0219 05:40:00.709156 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6" podUID="49d66f3b-e451-4b73-bc6a-4b854a71a4d6" Feb 19 05:40:01 crc kubenswrapper[5012]: E0219 05:40:01.223047 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 19 05:40:01 crc kubenswrapper[5012]: E0219 05:40:01.223282 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jcl5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-dgldv_openstack-operators(8629b5e4-e6a8-4c47-b76b-f58a26b42912): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:40:01 crc kubenswrapper[5012]: E0219 05:40:01.224559 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv" podUID="8629b5e4-e6a8-4c47-b76b-f58a26b42912" Feb 19 05:40:01 crc kubenswrapper[5012]: E0219 05:40:01.525471 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6" podUID="49d66f3b-e451-4b73-bc6a-4b854a71a4d6" Feb 19 05:40:01 crc kubenswrapper[5012]: E0219 05:40:01.527728 5012 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv" podUID="8629b5e4-e6a8-4c47-b76b-f58a26b42912" Feb 19 05:40:01 crc kubenswrapper[5012]: E0219 05:40:01.838588 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 19 05:40:01 crc kubenswrapper[5012]: E0219 05:40:01.838754 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r7rs7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-9zkvx_openstack-operators(dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:40:01 crc kubenswrapper[5012]: E0219 05:40:01.840018 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx" podUID="dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43" Feb 19 05:40:02 crc kubenswrapper[5012]: E0219 05:40:02.529146 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx" podUID="dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43" Feb 19 05:40:03 crc kubenswrapper[5012]: E0219 05:40:03.088052 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 19 05:40:03 crc kubenswrapper[5012]: E0219 05:40:03.088926 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xf7vn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-l65c5_openstack-operators(457202a7-ae9f-4d06-8690-d220e532b305): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:40:03 crc kubenswrapper[5012]: E0219 05:40:03.090276 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5" podUID="457202a7-ae9f-4d06-8690-d220e532b305" Feb 19 05:40:03 crc kubenswrapper[5012]: E0219 05:40:03.537847 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5" podUID="457202a7-ae9f-4d06-8690-d220e532b305" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.568669 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp" event={"ID":"bfca307c-9b00-4c12-bdd6-a394b7cc7cfd","Type":"ContainerStarted","Data":"ce9ba9dd3a9689fda25dde0374abb8eeef49a7a1b960f3335da940769d1dfb72"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.568938 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.570019 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" event={"ID":"739941d0-4bff-4dae-8f01-636386a37dd0","Type":"ContainerStarted","Data":"41e527e51cfa6c21b3b0a1d47834362ed3c08eda72c23067a5b83ed7da4219aa"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.570187 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.571293 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw" event={"ID":"11d49fcd-6e31-47e5-84a1-c6ae972e13cb","Type":"ContainerStarted","Data":"52b7b6abaff066152390196213b574b1a471cf22ba37646da14a6a1cd804c17c"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.571482 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.573282 5012 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" event={"ID":"ef60eda4-7ead-499b-b70f-07a34574096f","Type":"ContainerStarted","Data":"c1ba4d205aa3ee79d3975e9041ff63bbcdfa0dba4991ea498b0e241ec8cd09c3"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.573459 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.580465 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc" event={"ID":"b123191d-e55b-4ddc-90ea-abcb34c97be2","Type":"ContainerStarted","Data":"2f7f8ffa24601d945eb9127adf1b9d648590be6416b6871afeb7c96a15fb7634"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.580588 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.585850 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8" event={"ID":"1e872b11-03d6-4d3f-8e06-e10e1e73d917","Type":"ContainerStarted","Data":"8454ddceb9f7c7432891d0362d364721444a3d3389baa49984ec90dd9bedcbfc"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.585983 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.587747 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw" event={"ID":"08a4f79c-e42e-4609-b104-01b9a05ac95a","Type":"ContainerStarted","Data":"13ded6e10183374ce42cdb2c87fb3d14305ffaa1c8f76c704c4b93262623a139"} Feb 19 05:40:06 crc 
kubenswrapper[5012]: I0219 05:40:06.587807 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.588504 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp" podStartSLOduration=4.924003061 podStartE2EDuration="23.588494313s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:44.404278634 +0000 UTC m=+880.437601203" lastFinishedPulling="2026-02-19 05:40:03.068769886 +0000 UTC m=+899.102092455" observedRunningTime="2026-02-19 05:40:06.5875239 +0000 UTC m=+902.620846469" watchObservedRunningTime="2026-02-19 05:40:06.588494313 +0000 UTC m=+902.621816882" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.589863 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6" event={"ID":"4f281b5b-b656-4d4a-b628-d4bfe4fc94f9","Type":"ContainerStarted","Data":"f3eebdfd0c380fa6d309b6b90709dd1d2ca1551ee3ebd735ab4b0cd719b74a47"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.590018 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.592510 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5" event={"ID":"e9e07b56-2724-4046-8a60-81b751fb0588","Type":"ContainerStarted","Data":"18666c2dd62284e9696eeae27bf48669d12cd0fa8fb141aa99cf14dfddb319a2"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.592849 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5" Feb 19 05:40:06 crc 
kubenswrapper[5012]: I0219 05:40:06.593928 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj" event={"ID":"10e6fa53-581b-4965-8a38-c70a5c61c6d7","Type":"ContainerStarted","Data":"21830277d8d70a6ba7a48febc68f048f8f86cc5bc5285664e462d39d8897c7bb"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.594582 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.595578 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv" event={"ID":"8af03a54-ad7a-4684-b5a6-ba83f410e6ed","Type":"ContainerStarted","Data":"6f7a49a30d338c4b70774021b80f8d34e54904a95a68d1fe18abac136042c3c1"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.595910 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.604243 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" event={"ID":"73e25e30-860d-4faf-b1f3-bc284f7189d1","Type":"ContainerStarted","Data":"26fbdc9144715d1125ed43062df9ba915be8d4469b9a861910822792449590b8"} Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.604424 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.605109 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8" podStartSLOduration=6.929182781 podStartE2EDuration="23.605094478s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" 
firstStartedPulling="2026-02-19 05:39:45.146677665 +0000 UTC m=+881.180000234" lastFinishedPulling="2026-02-19 05:40:01.822589362 +0000 UTC m=+897.855911931" observedRunningTime="2026-02-19 05:40:06.599296186 +0000 UTC m=+902.632618755" watchObservedRunningTime="2026-02-19 05:40:06.605094478 +0000 UTC m=+902.638417047" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.614970 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc" podStartSLOduration=3.49723525 podStartE2EDuration="23.614958618s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.141750035 +0000 UTC m=+881.175072594" lastFinishedPulling="2026-02-19 05:40:05.259473403 +0000 UTC m=+901.292795962" observedRunningTime="2026-02-19 05:40:06.612693962 +0000 UTC m=+902.646016531" watchObservedRunningTime="2026-02-19 05:40:06.614958618 +0000 UTC m=+902.648281187" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.631806 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw" podStartSLOduration=6.505899736 podStartE2EDuration="23.631790097s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:44.696725832 +0000 UTC m=+880.730048411" lastFinishedPulling="2026-02-19 05:40:01.822616203 +0000 UTC m=+897.855938772" observedRunningTime="2026-02-19 05:40:06.62819816 +0000 UTC m=+902.661520729" watchObservedRunningTime="2026-02-19 05:40:06.631790097 +0000 UTC m=+902.665112666" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.648613 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7" podStartSLOduration=3.7097856350000002 podStartE2EDuration="23.648600127s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" 
firstStartedPulling="2026-02-19 05:39:45.39376248 +0000 UTC m=+881.427085049" lastFinishedPulling="2026-02-19 05:40:05.332576972 +0000 UTC m=+901.365899541" observedRunningTime="2026-02-19 05:40:06.647034338 +0000 UTC m=+902.680356907" watchObservedRunningTime="2026-02-19 05:40:06.648600127 +0000 UTC m=+902.681922696" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.665114 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47" podStartSLOduration=3.74742978 podStartE2EDuration="23.665101428s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.477276053 +0000 UTC m=+881.510598622" lastFinishedPulling="2026-02-19 05:40:05.394947701 +0000 UTC m=+901.428270270" observedRunningTime="2026-02-19 05:40:06.664968585 +0000 UTC m=+902.698291154" watchObservedRunningTime="2026-02-19 05:40:06.665101428 +0000 UTC m=+902.698423997" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.682863 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj" podStartSLOduration=5.427503557 podStartE2EDuration="23.68284722s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.352193878 +0000 UTC m=+881.385516447" lastFinishedPulling="2026-02-19 05:40:03.607537541 +0000 UTC m=+899.640860110" observedRunningTime="2026-02-19 05:40:06.681121238 +0000 UTC m=+902.714443807" watchObservedRunningTime="2026-02-19 05:40:06.68284722 +0000 UTC m=+902.716169789" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.702030 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw" podStartSLOduration=4.798391923 podStartE2EDuration="23.702011267s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 
05:39:45.360750216 +0000 UTC m=+881.394072775" lastFinishedPulling="2026-02-19 05:40:04.26436955 +0000 UTC m=+900.297692119" observedRunningTime="2026-02-19 05:40:06.694111664 +0000 UTC m=+902.727434233" watchObservedRunningTime="2026-02-19 05:40:06.702011267 +0000 UTC m=+902.735333836" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.742050 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8" podStartSLOduration=3.928461377 podStartE2EDuration="23.742032371s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.491765395 +0000 UTC m=+881.525087954" lastFinishedPulling="2026-02-19 05:40:05.305336379 +0000 UTC m=+901.338658948" observedRunningTime="2026-02-19 05:40:06.716599142 +0000 UTC m=+902.749921701" watchObservedRunningTime="2026-02-19 05:40:06.742032371 +0000 UTC m=+902.775354940" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.743512 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv" podStartSLOduration=6.407677267 podStartE2EDuration="23.743505817s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:44.486766792 +0000 UTC m=+880.520089361" lastFinishedPulling="2026-02-19 05:40:01.822595342 +0000 UTC m=+897.855917911" observedRunningTime="2026-02-19 05:40:06.738955446 +0000 UTC m=+902.772278015" watchObservedRunningTime="2026-02-19 05:40:06.743505817 +0000 UTC m=+902.776828386" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.781835 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6" podStartSLOduration=3.85547489 podStartE2EDuration="23.781816619s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.380408005 +0000 UTC 
m=+881.413730574" lastFinishedPulling="2026-02-19 05:40:05.306749734 +0000 UTC m=+901.340072303" observedRunningTime="2026-02-19 05:40:06.763583116 +0000 UTC m=+902.796905685" watchObservedRunningTime="2026-02-19 05:40:06.781816619 +0000 UTC m=+902.815139188" Feb 19 05:40:06 crc kubenswrapper[5012]: I0219 05:40:06.782904 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5" podStartSLOduration=6.337480927 podStartE2EDuration="23.782898196s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.013552484 +0000 UTC m=+881.046875053" lastFinishedPulling="2026-02-19 05:40:02.458969743 +0000 UTC m=+898.492292322" observedRunningTime="2026-02-19 05:40:06.781537623 +0000 UTC m=+902.814860192" watchObservedRunningTime="2026-02-19 05:40:06.782898196 +0000 UTC m=+902.816220765" Feb 19 05:40:08 crc kubenswrapper[5012]: I0219 05:40:08.852432 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cxhkh"] Feb 19 05:40:08 crc kubenswrapper[5012]: I0219 05:40:08.856480 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:08 crc kubenswrapper[5012]: I0219 05:40:08.863368 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cxhkh"] Feb 19 05:40:08 crc kubenswrapper[5012]: I0219 05:40:08.930518 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-utilities\") pod \"community-operators-cxhkh\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:08 crc kubenswrapper[5012]: I0219 05:40:08.930603 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-catalog-content\") pod \"community-operators-cxhkh\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:08 crc kubenswrapper[5012]: I0219 05:40:08.930626 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcmmp\" (UniqueName: \"kubernetes.io/projected/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-kube-api-access-pcmmp\") pod \"community-operators-cxhkh\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:09 crc kubenswrapper[5012]: I0219 05:40:09.032041 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-catalog-content\") pod \"community-operators-cxhkh\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:09 crc kubenswrapper[5012]: I0219 05:40:09.032079 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pcmmp\" (UniqueName: \"kubernetes.io/projected/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-kube-api-access-pcmmp\") pod \"community-operators-cxhkh\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:09 crc kubenswrapper[5012]: I0219 05:40:09.032199 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-utilities\") pod \"community-operators-cxhkh\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:09 crc kubenswrapper[5012]: I0219 05:40:09.032693 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-catalog-content\") pod \"community-operators-cxhkh\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:09 crc kubenswrapper[5012]: I0219 05:40:09.032721 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-utilities\") pod \"community-operators-cxhkh\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:09 crc kubenswrapper[5012]: I0219 05:40:09.053336 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcmmp\" (UniqueName: \"kubernetes.io/projected/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-kube-api-access-pcmmp\") pod \"community-operators-cxhkh\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") " pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:09 crc kubenswrapper[5012]: I0219 05:40:09.178619 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cxhkh" Feb 19 05:40:09 crc kubenswrapper[5012]: I0219 05:40:09.646753 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" event={"ID":"4a3cde05-282a-4c65-9570-74d04c71a034","Type":"ContainerStarted","Data":"db85e8efce83c4aaf4a4ce23309b971ff444610da3fcb1309b7fbd49329e16ad"} Feb 19 05:40:09 crc kubenswrapper[5012]: I0219 05:40:09.647609 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cxhkh"] Feb 19 05:40:09 crc kubenswrapper[5012]: W0219 05:40:09.669590 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde3ffb30_20cd_4e13_a51c_9d159b1ac3c4.slice/crio-a2c7c4a19968ec934eb2d82b6fd6c582a0e3ba562c22d8eb6cdb58faa6826649 WatchSource:0}: Error finding container a2c7c4a19968ec934eb2d82b6fd6c582a0e3ba562c22d8eb6cdb58faa6826649: Status 404 returned error can't find the container with id a2c7c4a19968ec934eb2d82b6fd6c582a0e3ba562c22d8eb6cdb58faa6826649 Feb 19 05:40:10 crc kubenswrapper[5012]: I0219 05:40:10.655355 5012 generic.go:334] "Generic (PLEG): container finished" podID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerID="63155282e6f7bbbff3f0ae2f5fef6845f4cb91c8ffd7d7fe82a1ae65c034544e" exitCode=0 Feb 19 05:40:10 crc kubenswrapper[5012]: I0219 05:40:10.655496 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxhkh" event={"ID":"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4","Type":"ContainerDied","Data":"63155282e6f7bbbff3f0ae2f5fef6845f4cb91c8ffd7d7fe82a1ae65c034544e"} Feb 19 05:40:10 crc kubenswrapper[5012]: I0219 05:40:10.655733 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxhkh" 
event={"ID":"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4","Type":"ContainerStarted","Data":"a2c7c4a19968ec934eb2d82b6fd6c582a0e3ba562c22d8eb6cdb58faa6826649"} Feb 19 05:40:10 crc kubenswrapper[5012]: I0219 05:40:10.691640 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mqc2w" podStartSLOduration=3.327943211 podStartE2EDuration="26.691615172s" podCreationTimestamp="2026-02-19 05:39:44 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.487176064 +0000 UTC m=+881.520498633" lastFinishedPulling="2026-02-19 05:40:08.850847985 +0000 UTC m=+904.884170594" observedRunningTime="2026-02-19 05:40:09.67636679 +0000 UTC m=+905.709689389" watchObservedRunningTime="2026-02-19 05:40:10.691615172 +0000 UTC m=+906.724937781" Feb 19 05:40:11 crc kubenswrapper[5012]: I0219 05:40:11.668421 5012 generic.go:334] "Generic (PLEG): container finished" podID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerID="aafcf8e55096f9e884517934f0d856f7a5e8f3dbd56d88ebfc6b91bb108af0a5" exitCode=0 Feb 19 05:40:11 crc kubenswrapper[5012]: I0219 05:40:11.668562 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxhkh" event={"ID":"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4","Type":"ContainerDied","Data":"aafcf8e55096f9e884517934f0d856f7a5e8f3dbd56d88ebfc6b91bb108af0a5"} Feb 19 05:40:12 crc kubenswrapper[5012]: I0219 05:40:12.679852 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n" event={"ID":"0cc1b41b-fbf6-4d0c-b721-dcad09c03feb","Type":"ContainerStarted","Data":"c9032249bd4ffac9263705b295d828d39771a43fa220b53d832929f0dea49e6c"} Feb 19 05:40:12 crc kubenswrapper[5012]: I0219 05:40:12.681450 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n" Feb 19 05:40:12 crc kubenswrapper[5012]: I0219 
05:40:12.684659 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxhkh" event={"ID":"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4","Type":"ContainerStarted","Data":"4793da2abb43c2d8c767bb759c94d3903c33395aff9cd6c898118590dbb050fc"}
Feb 19 05:40:12 crc kubenswrapper[5012]: I0219 05:40:12.700034 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n" podStartSLOduration=2.675679572 podStartE2EDuration="29.700014212s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.143815605 +0000 UTC m=+881.177138174" lastFinishedPulling="2026-02-19 05:40:12.168150215 +0000 UTC m=+908.201472814" observedRunningTime="2026-02-19 05:40:12.695454331 +0000 UTC m=+908.728776940" watchObservedRunningTime="2026-02-19 05:40:12.700014212 +0000 UTC m=+908.733336781"
Feb 19 05:40:12 crc kubenswrapper[5012]: I0219 05:40:12.725804 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cxhkh" podStartSLOduration=3.304952483 podStartE2EDuration="4.725783519s" podCreationTimestamp="2026-02-19 05:40:08 +0000 UTC" firstStartedPulling="2026-02-19 05:40:10.658677601 +0000 UTC m=+906.692000210" lastFinishedPulling="2026-02-19 05:40:12.079508637 +0000 UTC m=+908.112831246" observedRunningTime="2026-02-19 05:40:12.724098828 +0000 UTC m=+908.757421407" watchObservedRunningTime="2026-02-19 05:40:12.725783519 +0000 UTC m=+908.759106098"
Feb 19 05:40:13 crc kubenswrapper[5012]: I0219 05:40:13.713832 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-556xv"
Feb 19 05:40:13 crc kubenswrapper[5012]: I0219 05:40:13.751627 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-kt4nw"
Feb 19 05:40:13 crc kubenswrapper[5012]: I0219 05:40:13.804621 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-5szxp"
Feb 19 05:40:13 crc kubenswrapper[5012]: I0219 05:40:13.880889 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-ldrx5"
Feb 19 05:40:13 crc kubenswrapper[5012]: I0219 05:40:13.931252 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-rpbt8"
Feb 19 05:40:13 crc kubenswrapper[5012]: I0219 05:40:13.944804 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-27hfc"
Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.079746 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-csct6"
Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.162006 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-pqrs7"
Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.182798 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-nlqtw"
Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.184733 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-25qtj"
Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.366786 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-z5r47"
Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.431093 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.431171 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.620031 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-pcpk8"
Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.714268 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv" event={"ID":"8629b5e4-e6a8-4c47-b76b-f58a26b42912","Type":"ContainerStarted","Data":"c20300be31342ad71df790048cb886f5ba7c0f16593302cd60220954b6454876"}
Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.714325 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6" event={"ID":"49d66f3b-e451-4b73-bc6a-4b854a71a4d6","Type":"ContainerStarted","Data":"5a2840d29bbe3cb005c6b77ceb5334b2faea6eb91af8578859736ff997e6d97e"}
Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.714340 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4" event={"ID":"c55ed223-371b-409a-bcb6-8ca6d2a3c908","Type":"ContainerStarted","Data":"055a483e1e891fbc0be6f7d229ca39dac71dd6bf1c89764fcdcb42a2de49fa81"}
Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.715151 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4"
Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.715496 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv"
Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.715782 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6"
Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.789314 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv" podStartSLOduration=2.778634018 podStartE2EDuration="31.789275688s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.114343478 +0000 UTC m=+881.147666047" lastFinishedPulling="2026-02-19 05:40:14.124985148 +0000 UTC m=+910.158307717" observedRunningTime="2026-02-19 05:40:14.781039118 +0000 UTC m=+910.814361697" watchObservedRunningTime="2026-02-19 05:40:14.789275688 +0000 UTC m=+910.822598257"
Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.800828 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4" podStartSLOduration=2.993711694 podStartE2EDuration="31.800814269s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.468870268 +0000 UTC m=+881.502192837" lastFinishedPulling="2026-02-19 05:40:14.275972843 +0000 UTC m=+910.309295412" observedRunningTime="2026-02-19 05:40:14.799395385 +0000 UTC m=+910.832717954" watchObservedRunningTime="2026-02-19 05:40:14.800814269 +0000 UTC m=+910.834136838"
Feb 19 05:40:14 crc kubenswrapper[5012]: I0219 05:40:14.830456 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6" podStartSLOduration=3.013503745 podStartE2EDuration="31.83043572s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.352955896 +0000 UTC m=+881.386278465" lastFinishedPulling="2026-02-19 05:40:14.169887871 +0000 UTC m=+910.203210440" observedRunningTime="2026-02-19 05:40:14.824478665 +0000 UTC m=+910.857801234" watchObservedRunningTime="2026-02-19 05:40:14.83043572 +0000 UTC m=+910.863758289"
Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.551686 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"
Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.565038 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/996bfd61-486b-432d-9e09-d3a90ff9124c-cert\") pod \"infra-operator-controller-manager-79d975b745-cp8kx\" (UID: \"996bfd61-486b-432d-9e09-d3a90ff9124c\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"
Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.594222 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-llgkh"
Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.603375 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"
Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.725778 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx" event={"ID":"dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43","Type":"ContainerStarted","Data":"aa7e878a41bf73aefcc53e99b65aa02626a2a4c40ad79a53aa40b9bbf411dc72"}
Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.726871 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx"
Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.750325 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x" event={"ID":"8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be","Type":"ContainerStarted","Data":"49bf4dabe3e854716ac981e48b7349f157fd95eb991fd5c160d7fb183d62ffec"}
Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.751028 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x"
Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.758870 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx" podStartSLOduration=2.664331566 podStartE2EDuration="32.75885415s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.067222011 +0000 UTC m=+881.100544580" lastFinishedPulling="2026-02-19 05:40:15.161744595 +0000 UTC m=+911.195067164" observedRunningTime="2026-02-19 05:40:15.754644098 +0000 UTC m=+911.787966667" watchObservedRunningTime="2026-02-19 05:40:15.75885415 +0000 UTC m=+911.792176719"
Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.782515 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x" podStartSLOduration=2.367585193 podStartE2EDuration="32.782496246s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:44.706242554 +0000 UTC m=+880.739565123" lastFinishedPulling="2026-02-19 05:40:15.121153607 +0000 UTC m=+911.154476176" observedRunningTime="2026-02-19 05:40:15.777544675 +0000 UTC m=+911.810867254" watchObservedRunningTime="2026-02-19 05:40:15.782496246 +0000 UTC m=+911.815818815"
Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.857265 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4"
Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.872134 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d6eb3922-90e6-4bb1-8caa-aac6b69c76b0-cert\") pod \"openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4\" (UID: \"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4"
Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.953143 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-dfvzm"
Feb 19 05:40:15 crc kubenswrapper[5012]: I0219 05:40:15.960773 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4"
Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.060499 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n"
Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.060575 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n"
Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.063984 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-webhook-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n"
Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.064648 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d1f124a8-4132-458d-a5a5-1839d31e7772-metrics-certs\") pod \"openstack-operator-controller-manager-69ff7bc449-tj54n\" (UID: \"d1f124a8-4132-458d-a5a5-1839d31e7772\") " pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n"
Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.149785 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"]
Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.362274 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-d8sxf"
Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.370425 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n"
Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.463621 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4"]
Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.667863 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n"]
Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.758572 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" event={"ID":"d1f124a8-4132-458d-a5a5-1839d31e7772","Type":"ContainerStarted","Data":"4e27172a3ea7c2b38d5773114946ce08511db334df5da49a09d7658be6255515"}
Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.760050 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5" event={"ID":"457202a7-ae9f-4d06-8690-d220e532b305","Type":"ContainerStarted","Data":"278cbbe025cde94400df481393ab560ec00034782b9295958627ef650894e9e8"}
Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.760248 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5"
Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.761288 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" event={"ID":"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0","Type":"ContainerStarted","Data":"3ea2b185436885db83e92745816adf82ef54b157c12c24b39bb123293d2c8228"}
Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.762141 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" event={"ID":"996bfd61-486b-432d-9e09-d3a90ff9124c","Type":"ContainerStarted","Data":"3d2b640d2d5bedc755ffda4d83a902a8ebf56ccf7d6c44f4ff14b786469ac48c"}
Feb 19 05:40:16 crc kubenswrapper[5012]: I0219 05:40:16.777433 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5" podStartSLOduration=3.011315282 podStartE2EDuration="33.777410824s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:39:45.37241 +0000 UTC m=+881.405732569" lastFinishedPulling="2026-02-19 05:40:16.138505522 +0000 UTC m=+912.171828111" observedRunningTime="2026-02-19 05:40:16.773188761 +0000 UTC m=+912.806511330" watchObservedRunningTime="2026-02-19 05:40:16.777410824 +0000 UTC m=+912.810733393"
Feb 19 05:40:17 crc kubenswrapper[5012]: I0219 05:40:17.771410 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" event={"ID":"d1f124a8-4132-458d-a5a5-1839d31e7772","Type":"ContainerStarted","Data":"6c7b1312a1db5b69bd08ec2601f12660b0884fd2b593cdffa0a7c346e955ef18"}
Feb 19 05:40:17 crc kubenswrapper[5012]: I0219 05:40:17.801017 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n" podStartSLOduration=33.800988139 podStartE2EDuration="33.800988139s" podCreationTimestamp="2026-02-19 05:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:40:17.794104932 +0000 UTC m=+913.827427521" watchObservedRunningTime="2026-02-19 05:40:17.800988139 +0000 UTC m=+913.834310728"
Feb 19 05:40:18 crc kubenswrapper[5012]: I0219 05:40:18.778272 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n"
Feb 19 05:40:19 crc kubenswrapper[5012]: I0219 05:40:19.179418 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cxhkh"
Feb 19 05:40:19 crc kubenswrapper[5012]: I0219 05:40:19.179503 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cxhkh"
Feb 19 05:40:19 crc kubenswrapper[5012]: I0219 05:40:19.245953 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cxhkh"
Feb 19 05:40:19 crc kubenswrapper[5012]: I0219 05:40:19.851482 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cxhkh"
Feb 19 05:40:19 crc kubenswrapper[5012]: I0219 05:40:19.905160 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cxhkh"]
Feb 19 05:40:21 crc kubenswrapper[5012]: I0219 05:40:21.806682 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cxhkh" podUID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerName="registry-server" containerID="cri-o://4793da2abb43c2d8c767bb759c94d3903c33395aff9cd6c898118590dbb050fc" gracePeriod=2
Feb 19 05:40:22 crc kubenswrapper[5012]: I0219 05:40:22.820524 5012 generic.go:334] "Generic (PLEG): container finished" podID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerID="4793da2abb43c2d8c767bb759c94d3903c33395aff9cd6c898118590dbb050fc" exitCode=0
Feb 19 05:40:22 crc kubenswrapper[5012]: I0219 05:40:22.820598 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxhkh" event={"ID":"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4","Type":"ContainerDied","Data":"4793da2abb43c2d8c767bb759c94d3903c33395aff9cd6c898118590dbb050fc"}
Feb 19 05:40:23 crc kubenswrapper[5012]: I0219 05:40:23.763142 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-qzq7x"
Feb 19 05:40:23 crc kubenswrapper[5012]: I0219 05:40:23.855608 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-dgldv"
Feb 19 05:40:23 crc kubenswrapper[5012]: I0219 05:40:23.878034 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-9zkvx"
Feb 19 05:40:23 crc kubenswrapper[5012]: I0219 05:40:23.998705 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-xzk2n"
Feb 19 05:40:24 crc kubenswrapper[5012]: I0219 05:40:24.116911 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l65c5"
Feb 19 05:40:24 crc kubenswrapper[5012]: I0219 05:40:24.212875 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-6hfg4"
Feb 19 05:40:24 crc kubenswrapper[5012]: I0219 05:40:24.260159 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-qjpw6"
Feb 19 05:40:26 crc kubenswrapper[5012]: I0219 05:40:26.379425 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-69ff7bc449-tj54n"
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.306393 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cxhkh"
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.444376 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-catalog-content\") pod \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") "
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.444493 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcmmp\" (UniqueName: \"kubernetes.io/projected/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-kube-api-access-pcmmp\") pod \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") "
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.444542 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-utilities\") pod \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\" (UID: \"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4\") "
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.445625 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-utilities" (OuterVolumeSpecName: "utilities") pod "de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" (UID: "de3ffb30-20cd-4e13-a51c-9d159b1ac3c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.451619 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-kube-api-access-pcmmp" (OuterVolumeSpecName: "kube-api-access-pcmmp") pod "de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" (UID: "de3ffb30-20cd-4e13-a51c-9d159b1ac3c4"). InnerVolumeSpecName "kube-api-access-pcmmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.546031 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcmmp\" (UniqueName: \"kubernetes.io/projected/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-kube-api-access-pcmmp\") on node \"crc\" DevicePath \"\""
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.546062 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.547903 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" (UID: "de3ffb30-20cd-4e13-a51c-9d159b1ac3c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.647114 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.880024 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" event={"ID":"d6eb3922-90e6-4bb1-8caa-aac6b69c76b0","Type":"ContainerStarted","Data":"2cf7deccf3a6b020bae25dc205a1d46a1f232e81416d627b0ea595a0d080ee7c"}
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.880292 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4"
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.883690 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" event={"ID":"996bfd61-486b-432d-9e09-d3a90ff9124c","Type":"ContainerStarted","Data":"df4d02a809f04b1b9c7f9c2725bcd62797273e173a717aabd4998912df93e448"}
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.883900 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.900182 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxhkh" event={"ID":"de3ffb30-20cd-4e13-a51c-9d159b1ac3c4","Type":"ContainerDied","Data":"a2c7c4a19968ec934eb2d82b6fd6c582a0e3ba562c22d8eb6cdb58faa6826649"}
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.900263 5012 scope.go:117] "RemoveContainer" containerID="4793da2abb43c2d8c767bb759c94d3903c33395aff9cd6c898118590dbb050fc"
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.901168 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cxhkh"
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.928102 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4" podStartSLOduration=34.097059755 podStartE2EDuration="44.928084194s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:40:16.476224663 +0000 UTC m=+912.509547272" lastFinishedPulling="2026-02-19 05:40:27.307249142 +0000 UTC m=+923.340571711" observedRunningTime="2026-02-19 05:40:27.919857214 +0000 UTC m=+923.953179823" watchObservedRunningTime="2026-02-19 05:40:27.928084194 +0000 UTC m=+923.961406763"
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.937871 5012 scope.go:117] "RemoveContainer" containerID="aafcf8e55096f9e884517934f0d856f7a5e8f3dbd56d88ebfc6b91bb108af0a5"
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.966932 5012 scope.go:117] "RemoveContainer" containerID="63155282e6f7bbbff3f0ae2f5fef6845f4cb91c8ffd7d7fe82a1ae65c034544e"
Feb 19 05:40:27 crc kubenswrapper[5012]: I0219 05:40:27.967203 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx" podStartSLOduration=33.819052108 podStartE2EDuration="44.967176456s" podCreationTimestamp="2026-02-19 05:39:43 +0000 UTC" firstStartedPulling="2026-02-19 05:40:16.174944319 +0000 UTC m=+912.208266888" lastFinishedPulling="2026-02-19 05:40:27.323068667 +0000 UTC m=+923.356391236" observedRunningTime="2026-02-19 05:40:27.967097554 +0000 UTC m=+924.000420133" watchObservedRunningTime="2026-02-19 05:40:27.967176456 +0000 UTC m=+924.000499055"
Feb 19 05:40:28 crc kubenswrapper[5012]: I0219 05:40:28.001094 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cxhkh"]
Feb 19 05:40:28 crc kubenswrapper[5012]: I0219 05:40:28.006576 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cxhkh"]
Feb 19 05:40:28 crc kubenswrapper[5012]: I0219 05:40:28.711904 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" path="/var/lib/kubelet/pods/de3ffb30-20cd-4e13-a51c-9d159b1ac3c4/volumes"
Feb 19 05:40:35 crc kubenswrapper[5012]: I0219 05:40:35.610275 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cp8kx"
Feb 19 05:40:35 crc kubenswrapper[5012]: I0219 05:40:35.971523 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4"
Feb 19 05:40:44 crc kubenswrapper[5012]: I0219 05:40:44.431018 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 05:40:44 crc kubenswrapper[5012]: I0219 05:40:44.431727 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 05:40:44 crc kubenswrapper[5012]: I0219 05:40:44.431810 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44"
Feb 19 05:40:44 crc kubenswrapper[5012]: I0219 05:40:44.432762 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f6b4f2485162f8c24d6693d845318234656e6a8c97d49d2e72f4427654fa319a"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 05:40:44 crc kubenswrapper[5012]: I0219 05:40:44.432884 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://f6b4f2485162f8c24d6693d845318234656e6a8c97d49d2e72f4427654fa319a" gracePeriod=600
Feb 19 05:40:45 crc kubenswrapper[5012]: I0219 05:40:45.067721 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="f6b4f2485162f8c24d6693d845318234656e6a8c97d49d2e72f4427654fa319a" exitCode=0
Feb 19 05:40:45 crc kubenswrapper[5012]: I0219 05:40:45.067875 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"f6b4f2485162f8c24d6693d845318234656e6a8c97d49d2e72f4427654fa319a"}
Feb 19 05:40:45 crc kubenswrapper[5012]: I0219 05:40:45.068122 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"0209690f43a6b6283a91e933f5b897e5259f5fced0261c8b5238e804ce206915"}
Feb 19 05:40:45 crc kubenswrapper[5012]: I0219 05:40:45.068157 5012 scope.go:117] "RemoveContainer" containerID="2fa30f17f6fec33303fdb3b3cb4c275384acd11d008a1c182ee7a051d5288089"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.696188 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-vwkhm"]
Feb 19 05:40:55 crc kubenswrapper[5012]: E0219 05:40:55.697925 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerName="extract-content"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.697998 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerName="extract-content"
Feb 19 05:40:55 crc kubenswrapper[5012]: E0219 05:40:55.698053 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerName="registry-server"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.698112 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerName="registry-server"
Feb 19 05:40:55 crc kubenswrapper[5012]: E0219 05:40:55.698178 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerName="extract-utilities"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.698232 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerName="extract-utilities"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.698448 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3ffb30-20cd-4e13-a51c-9d159b1ac3c4" containerName="registry-server"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.699253 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8468885bfc-vwkhm"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.703236 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.703735 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.703988 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.704212 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-2v55g"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.718615 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-vwkhm"]
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.720655 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c96faf-42fc-437a-894d-e1c7f75b3511-config\") pod \"dnsmasq-dns-8468885bfc-vwkhm\" (UID: \"b2c96faf-42fc-437a-894d-e1c7f75b3511\") " pod="openstack/dnsmasq-dns-8468885bfc-vwkhm"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.720723 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gshm7\" (UniqueName: \"kubernetes.io/projected/b2c96faf-42fc-437a-894d-e1c7f75b3511-kube-api-access-gshm7\") pod \"dnsmasq-dns-8468885bfc-vwkhm\" (UID: \"b2c96faf-42fc-437a-894d-e1c7f75b3511\") " pod="openstack/dnsmasq-dns-8468885bfc-vwkhm"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.750942 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-td7mg"]
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.752034 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545d49fd5c-td7mg"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.753616 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.771382 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-td7mg"]
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.821495 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c96faf-42fc-437a-894d-e1c7f75b3511-config\") pod \"dnsmasq-dns-8468885bfc-vwkhm\" (UID: \"b2c96faf-42fc-437a-894d-e1c7f75b3511\") " pod="openstack/dnsmasq-dns-8468885bfc-vwkhm"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.821883 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gshm7\" (UniqueName: \"kubernetes.io/projected/b2c96faf-42fc-437a-894d-e1c7f75b3511-kube-api-access-gshm7\") pod \"dnsmasq-dns-8468885bfc-vwkhm\" (UID: \"b2c96faf-42fc-437a-894d-e1c7f75b3511\") " pod="openstack/dnsmasq-dns-8468885bfc-vwkhm"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.822322 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c96faf-42fc-437a-894d-e1c7f75b3511-config\") pod \"dnsmasq-dns-8468885bfc-vwkhm\" (UID: \"b2c96faf-42fc-437a-894d-e1c7f75b3511\") " pod="openstack/dnsmasq-dns-8468885bfc-vwkhm"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.859481 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gshm7\" (UniqueName: \"kubernetes.io/projected/b2c96faf-42fc-437a-894d-e1c7f75b3511-kube-api-access-gshm7\") pod \"dnsmasq-dns-8468885bfc-vwkhm\" (UID: \"b2c96faf-42fc-437a-894d-e1c7f75b3511\") " pod="openstack/dnsmasq-dns-8468885bfc-vwkhm"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.923346 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-dns-svc\") pod \"dnsmasq-dns-545d49fd5c-td7mg\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " pod="openstack/dnsmasq-dns-545d49fd5c-td7mg"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.923456 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-config\") pod \"dnsmasq-dns-545d49fd5c-td7mg\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " pod="openstack/dnsmasq-dns-545d49fd5c-td7mg"
Feb 19 05:40:55 crc kubenswrapper[5012]: I0219 05:40:55.923484 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkzpw\" (UniqueName: \"kubernetes.io/projected/862b02ed-ae65-4348-8a31-81c1aff80089-kube-api-access-kkzpw\") pod \"dnsmasq-dns-545d49fd5c-td7mg\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " pod="openstack/dnsmasq-dns-545d49fd5c-td7mg"
Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 05:40:56.018004 5012 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 05:40:56.024541 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-config\") pod \"dnsmasq-dns-545d49fd5c-td7mg\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 05:40:56.024582 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkzpw\" (UniqueName: \"kubernetes.io/projected/862b02ed-ae65-4348-8a31-81c1aff80089-kube-api-access-kkzpw\") pod \"dnsmasq-dns-545d49fd5c-td7mg\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 05:40:56.024628 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-dns-svc\") pod \"dnsmasq-dns-545d49fd5c-td7mg\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 05:40:56.025369 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-dns-svc\") pod \"dnsmasq-dns-545d49fd5c-td7mg\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 05:40:56.025876 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-config\") pod \"dnsmasq-dns-545d49fd5c-td7mg\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 
05:40:56.043513 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkzpw\" (UniqueName: \"kubernetes.io/projected/862b02ed-ae65-4348-8a31-81c1aff80089-kube-api-access-kkzpw\") pod \"dnsmasq-dns-545d49fd5c-td7mg\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 05:40:56.070171 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 05:40:56.327188 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-td7mg"] Feb 19 05:40:56 crc kubenswrapper[5012]: I0219 05:40:56.425419 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-vwkhm"] Feb 19 05:40:56 crc kubenswrapper[5012]: W0219 05:40:56.426656 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2c96faf_42fc_437a_894d_e1c7f75b3511.slice/crio-39eed3d52a31374f301920c96915c3a090730af8bc3bc407b5e57355ea12607b WatchSource:0}: Error finding container 39eed3d52a31374f301920c96915c3a090730af8bc3bc407b5e57355ea12607b: Status 404 returned error can't find the container with id 39eed3d52a31374f301920c96915c3a090730af8bc3bc407b5e57355ea12607b Feb 19 05:40:57 crc kubenswrapper[5012]: I0219 05:40:57.181504 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" event={"ID":"862b02ed-ae65-4348-8a31-81c1aff80089","Type":"ContainerStarted","Data":"4e46c1414bde8bffdfc3f7f7ffe96ab966534855c2a4f448c23174850398a8f0"} Feb 19 05:40:57 crc kubenswrapper[5012]: I0219 05:40:57.183499 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" 
event={"ID":"b2c96faf-42fc-437a-894d-e1c7f75b3511","Type":"ContainerStarted","Data":"39eed3d52a31374f301920c96915c3a090730af8bc3bc407b5e57355ea12607b"} Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.217554 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-td7mg"] Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.235519 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59ddbc48b7-4t5tr"] Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.237139 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.240561 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59ddbc48b7-4t5tr"] Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.283252 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gwbv\" (UniqueName: \"kubernetes.io/projected/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-kube-api-access-5gwbv\") pod \"dnsmasq-dns-59ddbc48b7-4t5tr\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.283531 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-dns-svc\") pod \"dnsmasq-dns-59ddbc48b7-4t5tr\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.283572 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-config\") pod \"dnsmasq-dns-59ddbc48b7-4t5tr\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " 
pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.384739 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-config\") pod \"dnsmasq-dns-59ddbc48b7-4t5tr\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.384841 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gwbv\" (UniqueName: \"kubernetes.io/projected/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-kube-api-access-5gwbv\") pod \"dnsmasq-dns-59ddbc48b7-4t5tr\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.384880 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-dns-svc\") pod \"dnsmasq-dns-59ddbc48b7-4t5tr\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.385718 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-dns-svc\") pod \"dnsmasq-dns-59ddbc48b7-4t5tr\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.386241 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-config\") pod \"dnsmasq-dns-59ddbc48b7-4t5tr\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.407550 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gwbv\" (UniqueName: \"kubernetes.io/projected/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-kube-api-access-5gwbv\") pod \"dnsmasq-dns-59ddbc48b7-4t5tr\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.497116 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-vwkhm"] Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.525698 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65886c9755-l2845"] Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.526803 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.537460 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65886c9755-l2845"] Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.564328 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.587000 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvbph\" (UniqueName: \"kubernetes.io/projected/57e2c914-87bd-46f8-92c7-e87437f6758a-kube-api-access-tvbph\") pod \"dnsmasq-dns-65886c9755-l2845\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.587057 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-dns-svc\") pod \"dnsmasq-dns-65886c9755-l2845\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.587151 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-config\") pod \"dnsmasq-dns-65886c9755-l2845\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.688287 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-dns-svc\") pod \"dnsmasq-dns-65886c9755-l2845\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.688417 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-config\") pod \"dnsmasq-dns-65886c9755-l2845\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " 
pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.688442 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvbph\" (UniqueName: \"kubernetes.io/projected/57e2c914-87bd-46f8-92c7-e87437f6758a-kube-api-access-tvbph\") pod \"dnsmasq-dns-65886c9755-l2845\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.689149 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-dns-svc\") pod \"dnsmasq-dns-65886c9755-l2845\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.689393 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-config\") pod \"dnsmasq-dns-65886c9755-l2845\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.718532 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvbph\" (UniqueName: \"kubernetes.io/projected/57e2c914-87bd-46f8-92c7-e87437f6758a-kube-api-access-tvbph\") pod \"dnsmasq-dns-65886c9755-l2845\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") " pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.751916 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59ddbc48b7-4t5tr"] Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.790865 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f7d487d45-bvz4n"] Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.792036 5012 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.812389 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f7d487d45-bvz4n"] Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.859511 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65886c9755-l2845" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.892382 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-config\") pod \"dnsmasq-dns-7f7d487d45-bvz4n\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.892885 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-dns-svc\") pod \"dnsmasq-dns-7f7d487d45-bvz4n\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.893098 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcg6n\" (UniqueName: \"kubernetes.io/projected/79e01828-7818-4fe8-bd3f-8d39e9bf939c-kube-api-access-mcg6n\") pod \"dnsmasq-dns-7f7d487d45-bvz4n\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.994293 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-config\") pod \"dnsmasq-dns-7f7d487d45-bvz4n\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " 
pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.994368 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-dns-svc\") pod \"dnsmasq-dns-7f7d487d45-bvz4n\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.994416 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcg6n\" (UniqueName: \"kubernetes.io/projected/79e01828-7818-4fe8-bd3f-8d39e9bf939c-kube-api-access-mcg6n\") pod \"dnsmasq-dns-7f7d487d45-bvz4n\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.995515 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-dns-svc\") pod \"dnsmasq-dns-7f7d487d45-bvz4n\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:40:59 crc kubenswrapper[5012]: I0219 05:40:59.995521 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-config\") pod \"dnsmasq-dns-7f7d487d45-bvz4n\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.011435 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcg6n\" (UniqueName: \"kubernetes.io/projected/79e01828-7818-4fe8-bd3f-8d39e9bf939c-kube-api-access-mcg6n\") pod \"dnsmasq-dns-7f7d487d45-bvz4n\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 
05:41:00.130969 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.257760 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59ddbc48b7-4t5tr"] Feb 19 05:41:00 crc kubenswrapper[5012]: W0219 05:41:00.271559 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a48a7cc_f140_4802_8dd4_2f4bb1c62aed.slice/crio-100f9d605f088bd90ac2e34324dc80fd0839075a52b9732932e0a541bd1a7b13 WatchSource:0}: Error finding container 100f9d605f088bd90ac2e34324dc80fd0839075a52b9732932e0a541bd1a7b13: Status 404 returned error can't find the container with id 100f9d605f088bd90ac2e34324dc80fd0839075a52b9732932e0a541bd1a7b13 Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.357496 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.368953 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.371119 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-default-user" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.374484 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-erlang-cookie" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.374660 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-config-data" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.374966 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-plugins-conf" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.375215 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-server-conf" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.375477 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-notifications-rabbitmq-svc" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.375573 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-server-dockercfg-wg2nx" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.379191 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402026 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c628866-f96d-4e7b-8846-7073c98dd389-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402070 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402104 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c628866-f96d-4e7b-8846-7073c98dd389-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402122 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402145 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402161 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c628866-f96d-4e7b-8846-7073c98dd389-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " 
pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402187 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402202 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c628866-f96d-4e7b-8846-7073c98dd389-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402224 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c628866-f96d-4e7b-8846-7073c98dd389-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402251 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.402269 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpfwf\" (UniqueName: \"kubernetes.io/projected/3c628866-f96d-4e7b-8846-7073c98dd389-kube-api-access-qpfwf\") pod 
\"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.408912 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65886c9755-l2845"] Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503121 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c628866-f96d-4e7b-8846-7073c98dd389-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503562 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503597 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c628866-f96d-4e7b-8846-7073c98dd389-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503616 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503638 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503655 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c628866-f96d-4e7b-8846-7073c98dd389-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503676 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503693 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c628866-f96d-4e7b-8846-7073c98dd389-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503714 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c628866-f96d-4e7b-8846-7073c98dd389-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503743 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.503761 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpfwf\" (UniqueName: \"kubernetes.io/projected/3c628866-f96d-4e7b-8846-7073c98dd389-kube-api-access-qpfwf\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.504652 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c628866-f96d-4e7b-8846-7073c98dd389-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.504764 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.504765 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.505089 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3c628866-f96d-4e7b-8846-7073c98dd389-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.505242 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.505764 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3c628866-f96d-4e7b-8846-7073c98dd389-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.507970 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3c628866-f96d-4e7b-8846-7073c98dd389-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.527633 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.529061 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3c628866-f96d-4e7b-8846-7073c98dd389-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.530474 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpfwf\" (UniqueName: \"kubernetes.io/projected/3c628866-f96d-4e7b-8846-7073c98dd389-kube-api-access-qpfwf\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.531584 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3c628866-f96d-4e7b-8846-7073c98dd389-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.554034 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"3c628866-f96d-4e7b-8846-7073c98dd389\") " pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.602233 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f7d487d45-bvz4n"]
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.655425 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.656836 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.662526 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.662666 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.662798 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.663391 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.663528 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.663544 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.663697 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-s7g27"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.664755 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.704464 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/notifications-rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.832695 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-config-data\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.832783 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.832820 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.833026 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0095712-262e-4562-afac-0f2f4372224d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.833124 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.833239 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.833826 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.833915 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0095712-262e-4562-afac-0f2f4372224d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.833963 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8phq\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-kube-api-access-b8phq\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.834014 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.834121 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.907262 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.909845 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.921897 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.932406 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.932659 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.933253 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.933437 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.933617 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.934381 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hd6wk"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.936898 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.936935 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.936971 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.936996 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0095712-262e-4562-afac-0f2f4372224d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.937016 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8phq\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-kube-api-access-b8phq\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.937041 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.937063 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.937094 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-config-data\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.937114 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.937138 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.937159 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0095712-262e-4562-afac-0f2f4372224d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.938563 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.940073 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-config-data\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.940597 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.941155 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.942018 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.942638 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.946986 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0095712-262e-4562-afac-0f2f4372224d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.948672 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.948958 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0095712-262e-4562-afac-0f2f4372224d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.949272 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.952628 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.963696 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8phq\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-kube-api-access-b8phq\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.979838 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:41:00 crc kubenswrapper[5012]: I0219 05:41:00.987667 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040133 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040277 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a13d3004-2045-4daf-a925-7eccf541b1b4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040435 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040472 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040584 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a13d3004-2045-4daf-a925-7eccf541b1b4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040658 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040690 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040752 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040790 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zch8n\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-kube-api-access-zch8n\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040864 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.040910 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.143694 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.143753 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.143813 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a13d3004-2045-4daf-a925-7eccf541b1b4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.143852 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.143880 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.143935 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.143972 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zch8n\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-kube-api-access-zch8n\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.144023 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.144059 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.144092 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.144110 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a13d3004-2045-4daf-a925-7eccf541b1b4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.145183 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.145910 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.145994 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.146197 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.146565 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.147065 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.149982 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a13d3004-2045-4daf-a925-7eccf541b1b4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.150242 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.152487 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.165223 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zch8n\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-kube-api-access-zch8n\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.174435 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a13d3004-2045-4daf-a925-7eccf541b1b4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.182059 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.193670 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"]
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.242256 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.276345 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65886c9755-l2845" event={"ID":"57e2c914-87bd-46f8-92c7-e87437f6758a","Type":"ContainerStarted","Data":"2ef7e60f2849b48568b2db26b7cbdccc8e5409326bef7865ef80fbf90f513b0e"}
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.279790 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"3c628866-f96d-4e7b-8846-7073c98dd389","Type":"ContainerStarted","Data":"ae645339d04f191bc4c70c73035c85e9ed7afd942642f9724125802b95690a47"}
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.284569 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" event={"ID":"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed","Type":"ContainerStarted","Data":"100f9d605f088bd90ac2e34324dc80fd0839075a52b9732932e0a541bd1a7b13"}
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.298070 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" event={"ID":"79e01828-7818-4fe8-bd3f-8d39e9bf939c","Type":"ContainerStarted","Data":"4cb41e822d4dbb1861f13461a8bcb5e410e5b409d268141b4a6e8e97a369da40"}
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.437410 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 05:41:01 crc kubenswrapper[5012]: W0219 05:41:01.448880 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0095712_262e_4562_afac_0f2f4372224d.slice/crio-a9ce4884d01424dd045dfa7d8118a6965b35bc2fd9ba564b1b28a67e56e88f01 WatchSource:0}: Error finding container a9ce4884d01424dd045dfa7d8118a6965b35bc2fd9ba564b1b28a67e56e88f01: Status 404 returned error can't find the container with id a9ce4884d01424dd045dfa7d8118a6965b35bc2fd9ba564b1b28a67e56e88f01
Feb 19 05:41:01 crc kubenswrapper[5012]: I0219 05:41:01.826061 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.317883 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.320842 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0095712-262e-4562-afac-0f2f4372224d","Type":"ContainerStarted","Data":"a9ce4884d01424dd045dfa7d8118a6965b35bc2fd9ba564b1b28a67e56e88f01"}
Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.321094 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.322914 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a13d3004-2045-4daf-a925-7eccf541b1b4","Type":"ContainerStarted","Data":"856efb676cb6080920d1573427ad1823ab21a0fe78f76dfb2cca62d969151964"}
Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.324863 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.325354 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.325400 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-rjhwx"
Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.325595 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 19 05:41:02 crc
kubenswrapper[5012]: I0219 05:41:02.330617 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.354041 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.470544 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fd0c672-e258-4feb-8bbd-26135f92f7fb-kolla-config\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.470833 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd0c672-e258-4feb-8bbd-26135f92f7fb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.470867 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd0c672-e258-4feb-8bbd-26135f92f7fb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.470914 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd0c672-e258-4feb-8bbd-26135f92f7fb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.470956 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-l6w2k\" (UniqueName: \"kubernetes.io/projected/1fd0c672-e258-4feb-8bbd-26135f92f7fb-kube-api-access-l6w2k\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.471008 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.471038 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1fd0c672-e258-4feb-8bbd-26135f92f7fb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.471469 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1fd0c672-e258-4feb-8bbd-26135f92f7fb-config-data-default\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.572214 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd0c672-e258-4feb-8bbd-26135f92f7fb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.572257 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6w2k\" (UniqueName: 
\"kubernetes.io/projected/1fd0c672-e258-4feb-8bbd-26135f92f7fb-kube-api-access-l6w2k\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.572288 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.572336 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1fd0c672-e258-4feb-8bbd-26135f92f7fb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.572373 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1fd0c672-e258-4feb-8bbd-26135f92f7fb-config-data-default\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.572412 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fd0c672-e258-4feb-8bbd-26135f92f7fb-kolla-config\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.572427 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd0c672-e258-4feb-8bbd-26135f92f7fb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.572452 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd0c672-e258-4feb-8bbd-26135f92f7fb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.572984 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.573234 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1fd0c672-e258-4feb-8bbd-26135f92f7fb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.573837 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1fd0c672-e258-4feb-8bbd-26135f92f7fb-kolla-config\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.575149 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd0c672-e258-4feb-8bbd-26135f92f7fb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.578253 
5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1fd0c672-e258-4feb-8bbd-26135f92f7fb-config-data-default\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.586988 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fd0c672-e258-4feb-8bbd-26135f92f7fb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.591765 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6w2k\" (UniqueName: \"kubernetes.io/projected/1fd0c672-e258-4feb-8bbd-26135f92f7fb-kube-api-access-l6w2k\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.603276 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.603337 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fd0c672-e258-4feb-8bbd-26135f92f7fb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1fd0c672-e258-4feb-8bbd-26135f92f7fb\") " pod="openstack/openstack-galera-0" Feb 19 05:41:02 crc kubenswrapper[5012]: I0219 05:41:02.659110 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.299619 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 05:41:03 crc kubenswrapper[5012]: W0219 05:41:03.327725 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fd0c672_e258_4feb_8bbd_26135f92f7fb.slice/crio-47f1110042288f6f94d44f934829dc5fa532b8dd81bbb097f27affb84677eafe WatchSource:0}: Error finding container 47f1110042288f6f94d44f934829dc5fa532b8dd81bbb097f27affb84677eafe: Status 404 returned error can't find the container with id 47f1110042288f6f94d44f934829dc5fa532b8dd81bbb097f27affb84677eafe Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.739906 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.742994 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.745622 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-pxjvs" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.749709 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.750141 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.751808 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.752814 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.921696 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/04466d10-2177-4361-bd86-333c046b9e52-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.921752 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/04466d10-2177-4361-bd86-333c046b9e52-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.921780 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/04466d10-2177-4361-bd86-333c046b9e52-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.921820 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04466d10-2177-4361-bd86-333c046b9e52-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.921872 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.921902 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/04466d10-2177-4361-bd86-333c046b9e52-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.921924 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwmq8\" (UniqueName: \"kubernetes.io/projected/04466d10-2177-4361-bd86-333c046b9e52-kube-api-access-dwmq8\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:03 crc kubenswrapper[5012]: I0219 05:41:03.921956 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/04466d10-2177-4361-bd86-333c046b9e52-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.023717 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.023792 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/04466d10-2177-4361-bd86-333c046b9e52-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.023827 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwmq8\" (UniqueName: \"kubernetes.io/projected/04466d10-2177-4361-bd86-333c046b9e52-kube-api-access-dwmq8\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.023867 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/04466d10-2177-4361-bd86-333c046b9e52-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.023919 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/04466d10-2177-4361-bd86-333c046b9e52-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.023946 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/04466d10-2177-4361-bd86-333c046b9e52-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.023976 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04466d10-2177-4361-bd86-333c046b9e52-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.024020 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04466d10-2177-4361-bd86-333c046b9e52-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.024357 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/04466d10-2177-4361-bd86-333c046b9e52-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.024464 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") device mount path 
\"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.025739 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/04466d10-2177-4361-bd86-333c046b9e52-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.025883 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/04466d10-2177-4361-bd86-333c046b9e52-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.026743 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04466d10-2177-4361-bd86-333c046b9e52-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.046101 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04466d10-2177-4361-bd86-333c046b9e52-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.048515 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/04466d10-2177-4361-bd86-333c046b9e52-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 
05:41:04.051173 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwmq8\" (UniqueName: \"kubernetes.io/projected/04466d10-2177-4361-bd86-333c046b9e52-kube-api-access-dwmq8\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.107129 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"04466d10-2177-4361-bd86-333c046b9e52\") " pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.283889 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.284805 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.288057 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.288777 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-lm7dt" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.288792 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.319658 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.384701 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1fd0c672-e258-4feb-8bbd-26135f92f7fb","Type":"ContainerStarted","Data":"47f1110042288f6f94d44f934829dc5fa532b8dd81bbb097f27affb84677eafe"} Feb 19 05:41:04 crc 
kubenswrapper[5012]: I0219 05:41:04.399283 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.430749 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-combined-ca-bundle\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.430970 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-config-data\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.431059 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jllr6\" (UniqueName: \"kubernetes.io/projected/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-kube-api-access-jllr6\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.431083 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-memcached-tls-certs\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.431143 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-kolla-config\") pod \"memcached-0\" (UID: 
\"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.532376 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jllr6\" (UniqueName: \"kubernetes.io/projected/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-kube-api-access-jllr6\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.532429 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-memcached-tls-certs\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.532467 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-kolla-config\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.532567 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-combined-ca-bundle\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.532587 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-config-data\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.533436 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-config-data\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.534634 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-kolla-config\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.539632 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-combined-ca-bundle\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.550688 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-memcached-tls-certs\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.551099 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jllr6\" (UniqueName: \"kubernetes.io/projected/38a4a51f-c380-48fc-8f0e-cdd1ea09fa53-kube-api-access-jllr6\") pod \"memcached-0\" (UID: \"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53\") " pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.617723 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.962110 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 05:41:04 crc kubenswrapper[5012]: W0219 05:41:04.988664 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38a4a51f_c380_48fc_8f0e_cdd1ea09fa53.slice/crio-74ed3816d14374b8abf3c72c194cbbbe488bdd77110f3429b53676631c9f5fa3 WatchSource:0}: Error finding container 74ed3816d14374b8abf3c72c194cbbbe488bdd77110f3429b53676631c9f5fa3: Status 404 returned error can't find the container with id 74ed3816d14374b8abf3c72c194cbbbe488bdd77110f3429b53676631c9f5fa3 Feb 19 05:41:04 crc kubenswrapper[5012]: I0219 05:41:04.997202 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 05:41:05 crc kubenswrapper[5012]: I0219 05:41:05.399224 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"04466d10-2177-4361-bd86-333c046b9e52","Type":"ContainerStarted","Data":"4b830be35220905d2aad786468bb9e6dc43157df5e48cc3e6079dcc819616218"} Feb 19 05:41:05 crc kubenswrapper[5012]: I0219 05:41:05.401538 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53","Type":"ContainerStarted","Data":"74ed3816d14374b8abf3c72c194cbbbe488bdd77110f3429b53676631c9f5fa3"} Feb 19 05:41:06 crc kubenswrapper[5012]: I0219 05:41:06.549690 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 05:41:06 crc kubenswrapper[5012]: I0219 05:41:06.551523 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 05:41:06 crc kubenswrapper[5012]: I0219 05:41:06.557516 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 05:41:06 crc kubenswrapper[5012]: I0219 05:41:06.559840 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-sv9x8" Feb 19 05:41:06 crc kubenswrapper[5012]: I0219 05:41:06.674374 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcbl4\" (UniqueName: \"kubernetes.io/projected/6c04ef21-3d68-44e8-ba69-164f3b32b7a0-kube-api-access-jcbl4\") pod \"kube-state-metrics-0\" (UID: \"6c04ef21-3d68-44e8-ba69-164f3b32b7a0\") " pod="openstack/kube-state-metrics-0" Feb 19 05:41:06 crc kubenswrapper[5012]: I0219 05:41:06.775775 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcbl4\" (UniqueName: \"kubernetes.io/projected/6c04ef21-3d68-44e8-ba69-164f3b32b7a0-kube-api-access-jcbl4\") pod \"kube-state-metrics-0\" (UID: \"6c04ef21-3d68-44e8-ba69-164f3b32b7a0\") " pod="openstack/kube-state-metrics-0" Feb 19 05:41:06 crc kubenswrapper[5012]: I0219 05:41:06.815023 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcbl4\" (UniqueName: \"kubernetes.io/projected/6c04ef21-3d68-44e8-ba69-164f3b32b7a0-kube-api-access-jcbl4\") pod \"kube-state-metrics-0\" (UID: \"6c04ef21-3d68-44e8-ba69-164f3b32b7a0\") " pod="openstack/kube-state-metrics-0" Feb 19 05:41:06 crc kubenswrapper[5012]: I0219 05:41:06.875124 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.467604 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 05:41:07 crc kubenswrapper[5012]: W0219 05:41:07.496950 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c04ef21_3d68_44e8_ba69_164f3b32b7a0.slice/crio-1d78bdb8cd099c1e00c91080ebe4740fa66e2f4e7fc08f7ed987fe609d80ac23 WatchSource:0}: Error finding container 1d78bdb8cd099c1e00c91080ebe4740fa66e2f4e7fc08f7ed987fe609d80ac23: Status 404 returned error can't find the container with id 1d78bdb8cd099c1e00c91080ebe4740fa66e2f4e7fc08f7ed987fe609d80ac23 Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.638420 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.640438 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.643800 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.647482 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.648842 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.650760 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7bqtw" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.650932 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.651057 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.651189 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.651330 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.669611 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800103 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800160 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800195 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800220 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800236 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800262 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e31edbd-c20b-420d-8888-cafb392410cd-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800282 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800544 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-config\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800706 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s694\" (UniqueName: \"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-kube-api-access-7s694\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.800805 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902331 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e31edbd-c20b-420d-8888-cafb392410cd-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902382 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902432 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-config\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902476 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s694\" (UniqueName: \"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-kube-api-access-7s694\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902498 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: 
\"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902530 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902568 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902603 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902627 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.902643 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.904211 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.904241 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.904553 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.906475 5012 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.906506 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/80266977aa18e8991458f1f7d5520b709fb21586520e915bbacb4bc2380e455f/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.909512 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.909575 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-config\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.910083 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.911622 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e31edbd-c20b-420d-8888-cafb392410cd-config-out\") pod \"prometheus-metric-storage-0\" (UID: 
\"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.926231 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s694\" (UniqueName: \"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-kube-api-access-7s694\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.933488 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.939080 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:07 crc kubenswrapper[5012]: I0219 05:41:07.976914 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 05:41:08 crc kubenswrapper[5012]: I0219 05:41:08.456984 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 05:41:08 crc kubenswrapper[5012]: I0219 05:41:08.586656 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6c04ef21-3d68-44e8-ba69-164f3b32b7a0","Type":"ContainerStarted","Data":"1d78bdb8cd099c1e00c91080ebe4740fa66e2f4e7fc08f7ed987fe609d80ac23"} Feb 19 05:41:08 crc kubenswrapper[5012]: W0219 05:41:08.618179 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e31edbd_c20b_420d_8888_cafb392410cd.slice/crio-91314d71567782400d0673184328bab50c18185869b638d4949c49d81c11f6bb WatchSource:0}: Error finding container 91314d71567782400d0673184328bab50c18185869b638d4949c49d81c11f6bb: Status 404 returned error can't find the container with id 91314d71567782400d0673184328bab50c18185869b638d4949c49d81c11f6bb Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.552681 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cr94m"] Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.553982 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.568732 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.568991 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-pmbzw" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.569006 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-7qdpg"] Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.569274 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.574440 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.587535 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cr94m"] Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.587583 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7qdpg"] Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.645289 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16fbaba1-bd32-4121-8743-99422db74180-scripts\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.645424 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-etc-ovs\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " 
pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.645452 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-var-run\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.645498 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-combined-ca-bundle\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.646083 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-var-log-ovn\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.646119 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-var-lib\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.646164 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2crw\" (UniqueName: \"kubernetes.io/projected/16fbaba1-bd32-4121-8743-99422db74180-kube-api-access-f2crw\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " 
pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.646186 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rjj8\" (UniqueName: \"kubernetes.io/projected/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-kube-api-access-5rjj8\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.646202 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-var-run-ovn\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.646251 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-var-log\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.646575 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-scripts\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.646615 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-var-run\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 
crc kubenswrapper[5012]: I0219 05:41:09.646635 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-ovn-controller-tls-certs\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.668716 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerStarted","Data":"91314d71567782400d0673184328bab50c18185869b638d4949c49d81c11f6bb"} Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749130 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2crw\" (UniqueName: \"kubernetes.io/projected/16fbaba1-bd32-4121-8743-99422db74180-kube-api-access-f2crw\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749184 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rjj8\" (UniqueName: \"kubernetes.io/projected/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-kube-api-access-5rjj8\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749202 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-var-run-ovn\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749239 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-log\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-var-log\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749267 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-scripts\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749295 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-var-run\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749331 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-ovn-controller-tls-certs\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749359 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16fbaba1-bd32-4121-8743-99422db74180-scripts\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749416 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-etc-ovs\") pod \"ovn-controller-ovs-7qdpg\" (UID: 
\"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749435 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-var-run\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749452 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-combined-ca-bundle\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749470 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-var-log-ovn\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.749490 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-var-lib\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.750054 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-var-lib\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.750653 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-var-run-ovn\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.750767 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-var-log\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.751463 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-etc-ovs\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.751505 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-var-log-ovn\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.751656 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16fbaba1-bd32-4121-8743-99422db74180-var-run\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.751698 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-var-run\") pod \"ovn-controller-cr94m\" (UID: 
\"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.756050 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16fbaba1-bd32-4121-8743-99422db74180-scripts\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.758801 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-ovn-controller-tls-certs\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.759701 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-scripts\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.781968 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rjj8\" (UniqueName: \"kubernetes.io/projected/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-kube-api-access-5rjj8\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.784258 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2crw\" (UniqueName: \"kubernetes.io/projected/16fbaba1-bd32-4121-8743-99422db74180-kube-api-access-f2crw\") pod \"ovn-controller-ovs-7qdpg\" (UID: \"16fbaba1-bd32-4121-8743-99422db74180\") " pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 
05:41:09.785897 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2c9ac17-43ef-4ccb-83b1-e20ee03289de-combined-ca-bundle\") pod \"ovn-controller-cr94m\" (UID: \"e2c9ac17-43ef-4ccb-83b1-e20ee03289de\") " pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.855503 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.858620 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.863448 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-4dbr9" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.863507 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.863763 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.863910 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.865730 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.887487 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.951056 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cr94m" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.959246 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.959335 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a9e6735-4159-4248-a8f5-6714d386901a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.959369 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9e6735-4159-4248-a8f5-6714d386901a-config\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.959395 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a9e6735-4159-4248-a8f5-6714d386901a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.959423 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5a9e6735-4159-4248-a8f5-6714d386901a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:09 crc 
kubenswrapper[5012]: I0219 05:41:09.959445 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a9e6735-4159-4248-a8f5-6714d386901a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.959462 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57n8f\" (UniqueName: \"kubernetes.io/projected/5a9e6735-4159-4248-a8f5-6714d386901a-kube-api-access-57n8f\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.959487 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9e6735-4159-4248-a8f5-6714d386901a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:09 crc kubenswrapper[5012]: I0219 05:41:09.972595 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-7qdpg" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.060836 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5a9e6735-4159-4248-a8f5-6714d386901a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.060892 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a9e6735-4159-4248-a8f5-6714d386901a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.060925 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57n8f\" (UniqueName: \"kubernetes.io/projected/5a9e6735-4159-4248-a8f5-6714d386901a-kube-api-access-57n8f\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.060956 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9e6735-4159-4248-a8f5-6714d386901a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.061015 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.061054 5012 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a9e6735-4159-4248-a8f5-6714d386901a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.061087 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9e6735-4159-4248-a8f5-6714d386901a-config\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.061108 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a9e6735-4159-4248-a8f5-6714d386901a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.062264 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.062513 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5a9e6735-4159-4248-a8f5-6714d386901a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.062661 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a9e6735-4159-4248-a8f5-6714d386901a-config\") pod \"ovsdbserver-nb-0\" (UID: 
\"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.062789 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a9e6735-4159-4248-a8f5-6714d386901a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.067082 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a9e6735-4159-4248-a8f5-6714d386901a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.070733 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a9e6735-4159-4248-a8f5-6714d386901a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.082045 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a9e6735-4159-4248-a8f5-6714d386901a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.083160 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57n8f\" (UniqueName: \"kubernetes.io/projected/5a9e6735-4159-4248-a8f5-6714d386901a-kube-api-access-57n8f\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.083996 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5a9e6735-4159-4248-a8f5-6714d386901a\") " pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:10 crc kubenswrapper[5012]: I0219 05:41:10.183225 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.437514 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.439483 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.444268 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.444688 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-cpr8c" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.444945 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.445724 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.452041 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.538393 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wq4x\" (UniqueName: \"kubernetes.io/projected/00790bd0-5fbb-4927-8361-085c9691c171-kube-api-access-9wq4x\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc 
kubenswrapper[5012]: I0219 05:41:13.538440 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.538513 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00790bd0-5fbb-4927-8361-085c9691c171-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.538539 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/00790bd0-5fbb-4927-8361-085c9691c171-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.538590 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/00790bd0-5fbb-4927-8361-085c9691c171-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.538608 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00790bd0-5fbb-4927-8361-085c9691c171-config\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.538630 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00790bd0-5fbb-4927-8361-085c9691c171-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.538658 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00790bd0-5fbb-4927-8361-085c9691c171-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.639901 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/00790bd0-5fbb-4927-8361-085c9691c171-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.639991 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/00790bd0-5fbb-4927-8361-085c9691c171-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.640029 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00790bd0-5fbb-4927-8361-085c9691c171-config\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.640049 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/00790bd0-5fbb-4927-8361-085c9691c171-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.640076 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00790bd0-5fbb-4927-8361-085c9691c171-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.640097 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wq4x\" (UniqueName: \"kubernetes.io/projected/00790bd0-5fbb-4927-8361-085c9691c171-kube-api-access-9wq4x\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.640124 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.640190 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00790bd0-5fbb-4927-8361-085c9691c171-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.641523 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00790bd0-5fbb-4927-8361-085c9691c171-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 
05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.642684 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/00790bd0-5fbb-4927-8361-085c9691c171-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.642819 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00790bd0-5fbb-4927-8361-085c9691c171-config\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.642983 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.650234 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00790bd0-5fbb-4927-8361-085c9691c171-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.652824 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00790bd0-5fbb-4927-8361-085c9691c171-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.655526 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/00790bd0-5fbb-4927-8361-085c9691c171-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.660469 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wq4x\" (UniqueName: \"kubernetes.io/projected/00790bd0-5fbb-4927-8361-085c9691c171-kube-api-access-9wq4x\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.666787 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"00790bd0-5fbb-4927-8361-085c9691c171\") " pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:13 crc kubenswrapper[5012]: I0219 05:41:13.765217 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 05:41:21 crc kubenswrapper[5012]: I0219 05:41:21.878526 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 05:41:25 crc kubenswrapper[5012]: E0219 05:41:25.156443 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Feb 19 05:41:25 crc kubenswrapper[5012]: E0219 05:41:25.156896 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Feb 19 05:41:25 crc kubenswrapper[5012]: E0219 05:41:25.157024 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jcbl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(6c04ef21-3d68-44e8-ba69-164f3b32b7a0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 05:41:25 crc kubenswrapper[5012]: E0219 05:41:25.158822 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="6c04ef21-3d68-44e8-ba69-164f3b32b7a0" Feb 19 05:41:25 crc kubenswrapper[5012]: I0219 05:41:25.698543 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7qdpg"] Feb 19 05:41:25 crc kubenswrapper[5012]: I0219 
05:41:25.839059 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5a9e6735-4159-4248-a8f5-6714d386901a","Type":"ContainerStarted","Data":"2c6a79ea5c6119196f3da355e77e22d680a3eca004fa8ac8fee6d4e710f0e13e"} Feb 19 05:41:25 crc kubenswrapper[5012]: E0219 05:41:25.843495 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb\\\"\"" pod="openstack/kube-state-metrics-0" podUID="6c04ef21-3d68-44e8-ba69-164f3b32b7a0" Feb 19 05:41:29 crc kubenswrapper[5012]: I0219 05:41:29.875743 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7qdpg" event={"ID":"16fbaba1-bd32-4121-8743-99422db74180","Type":"ContainerStarted","Data":"39a6bd400d41740054527b3f52c850bca672c6e54784d97eb1a4cd34a485c239"} Feb 19 05:41:30 crc kubenswrapper[5012]: I0219 05:41:30.069762 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cr94m"] Feb 19 05:41:30 crc kubenswrapper[5012]: I0219 05:41:30.212199 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 05:41:31 crc kubenswrapper[5012]: W0219 05:41:31.860227 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2c9ac17_43ef_4ccb_83b1_e20ee03289de.slice/crio-235c8db2831222c53397bad4dd52402a9fb9e7fae73bb29dbc9edb0fdc48bdec WatchSource:0}: Error finding container 235c8db2831222c53397bad4dd52402a9fb9e7fae73bb29dbc9edb0fdc48bdec: Status 404 returned error can't find the container with id 235c8db2831222c53397bad4dd52402a9fb9e7fae73bb29dbc9edb0fdc48bdec Feb 19 05:41:31 crc kubenswrapper[5012]: I0219 05:41:31.898781 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-cr94m" event={"ID":"e2c9ac17-43ef-4ccb-83b1-e20ee03289de","Type":"ContainerStarted","Data":"235c8db2831222c53397bad4dd52402a9fb9e7fae73bb29dbc9edb0fdc48bdec"} Feb 19 05:41:35 crc kubenswrapper[5012]: E0219 05:41:35.173015 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Feb 19 05:41:35 crc kubenswrapper[5012]: E0219 05:41:35.174024 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Feb 19 05:41:35 crc kubenswrapper[5012]: E0219 05:41:35.174222 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gshm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8468885bfc-vwkhm_openstack(b2c96faf-42fc-437a-894d-e1c7f75b3511): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:41:35 crc kubenswrapper[5012]: E0219 05:41:35.178433 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" podUID="b2c96faf-42fc-437a-894d-e1c7f75b3511" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.436866 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.437199 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.437431 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpfwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod notifications-rabbitmq-server-0_openstack(3c628866-f96d-4e7b-8846-7073c98dd389): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:41:36 crc 
kubenswrapper[5012]: E0219 05:41:36.438555 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/notifications-rabbitmq-server-0" podUID="3c628866-f96d-4e7b-8846-7073c98dd389" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.479895 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.479983 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.480185 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n684h65fh56h6fh87h85h57h76h5b7h94hffh649hfbh8ch5bch56fh5c5hbh86hf9h99h5dch95h66hd5h555h566h646h546h79h9dh55dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gwbv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-59ddbc48b7-4t5tr_openstack(4a48a7cc-f140-4802-8dd4-2f4bb1c62aed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.481622 5012 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" podUID="4a48a7cc-f140-4802-8dd4-2f4bb1c62aed" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.510142 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.510192 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.510345 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kkzpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-545d49fd5c-td7mg_openstack(862b02ed-ae65-4348-8a31-81c1aff80089): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.511536 5012 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" podUID="862b02ed-ae65-4348-8a31-81c1aff80089" Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.517283 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.546690 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.546746 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.546865 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n658h5c5h88h68dhb6h57dhd4h697hb8h8fh74hb7h54fh54dh548h7h55dhb8h9fh55dh688h5bbh5d5h675h669hb7h67hbbhffh668h5c7hc5q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvbph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-65886c9755-l2845_openstack(57e2c914-87bd-46f8-92c7-e87437f6758a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.548068 5012 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-65886c9755-l2845" podUID="57e2c914-87bd-46f8-92c7-e87437f6758a" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.614995 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.615071 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.615214 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zch8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(a13d3004-2045-4daf-a925-7eccf541b1b4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:41:36 crc 
kubenswrapper[5012]: E0219 05:41:36.617227 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="a13d3004-2045-4daf-a925-7eccf541b1b4" Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.684090 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gshm7\" (UniqueName: \"kubernetes.io/projected/b2c96faf-42fc-437a-894d-e1c7f75b3511-kube-api-access-gshm7\") pod \"b2c96faf-42fc-437a-894d-e1c7f75b3511\" (UID: \"b2c96faf-42fc-437a-894d-e1c7f75b3511\") " Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.684143 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c96faf-42fc-437a-894d-e1c7f75b3511-config\") pod \"b2c96faf-42fc-437a-894d-e1c7f75b3511\" (UID: \"b2c96faf-42fc-437a-894d-e1c7f75b3511\") " Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.684931 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2c96faf-42fc-437a-894d-e1c7f75b3511-config" (OuterVolumeSpecName: "config") pod "b2c96faf-42fc-437a-894d-e1c7f75b3511" (UID: "b2c96faf-42fc-437a-894d-e1c7f75b3511"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.728436 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c96faf-42fc-437a-894d-e1c7f75b3511-kube-api-access-gshm7" (OuterVolumeSpecName: "kube-api-access-gshm7") pod "b2c96faf-42fc-437a-894d-e1c7f75b3511" (UID: "b2c96faf-42fc-437a-894d-e1c7f75b3511"). InnerVolumeSpecName "kube-api-access-gshm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.785512 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gshm7\" (UniqueName: \"kubernetes.io/projected/b2c96faf-42fc-437a-894d-e1c7f75b3511-kube-api-access-gshm7\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.785539 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c96faf-42fc-437a-894d-e1c7f75b3511-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.947520 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"00790bd0-5fbb-4927-8361-085c9691c171","Type":"ContainerStarted","Data":"b445ec4833440a541a722f31831dc1bac99cd73ab1cbec06d26f61e4470fd929"} Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.949655 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" event={"ID":"b2c96faf-42fc-437a-894d-e1c7f75b3511","Type":"ContainerDied","Data":"39eed3d52a31374f301920c96915c3a090730af8bc3bc407b5e57355ea12607b"} Feb 19 05:41:36 crc kubenswrapper[5012]: I0219 05:41:36.949723 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8468885bfc-vwkhm" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.952058 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="a13d3004-2045-4daf-a925-7eccf541b1b4" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.953589 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current\\\"\"" pod="openstack/dnsmasq-dns-65886c9755-l2845" podUID="57e2c914-87bd-46f8-92c7-e87437f6758a" Feb 19 05:41:36 crc kubenswrapper[5012]: E0219 05:41:36.954051 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current\\\"\"" pod="openstack/notifications-rabbitmq-server-0" podUID="3c628866-f96d-4e7b-8846-7073c98dd389" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.046114 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-vwkhm"] Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.064594 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-vwkhm"] Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.362644 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.365026 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.502185 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkzpw\" (UniqueName: \"kubernetes.io/projected/862b02ed-ae65-4348-8a31-81c1aff80089-kube-api-access-kkzpw\") pod \"862b02ed-ae65-4348-8a31-81c1aff80089\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.502692 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gwbv\" (UniqueName: \"kubernetes.io/projected/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-kube-api-access-5gwbv\") pod \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.502722 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-dns-svc\") pod \"862b02ed-ae65-4348-8a31-81c1aff80089\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.502757 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-dns-svc\") pod \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.502819 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-config\") pod \"862b02ed-ae65-4348-8a31-81c1aff80089\" (UID: \"862b02ed-ae65-4348-8a31-81c1aff80089\") " Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.502857 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-config\") pod \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\" (UID: \"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed\") " Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.503568 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a48a7cc-f140-4802-8dd4-2f4bb1c62aed" (UID: "4a48a7cc-f140-4802-8dd4-2f4bb1c62aed"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.503869 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "862b02ed-ae65-4348-8a31-81c1aff80089" (UID: "862b02ed-ae65-4348-8a31-81c1aff80089"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.504150 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-config" (OuterVolumeSpecName: "config") pod "862b02ed-ae65-4348-8a31-81c1aff80089" (UID: "862b02ed-ae65-4348-8a31-81c1aff80089"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.505583 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-kube-api-access-5gwbv" (OuterVolumeSpecName: "kube-api-access-5gwbv") pod "4a48a7cc-f140-4802-8dd4-2f4bb1c62aed" (UID: "4a48a7cc-f140-4802-8dd4-2f4bb1c62aed"). InnerVolumeSpecName "kube-api-access-5gwbv". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.505581 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-config" (OuterVolumeSpecName: "config") pod "4a48a7cc-f140-4802-8dd4-2f4bb1c62aed" (UID: "4a48a7cc-f140-4802-8dd4-2f4bb1c62aed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.506093 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/862b02ed-ae65-4348-8a31-81c1aff80089-kube-api-access-kkzpw" (OuterVolumeSpecName: "kube-api-access-kkzpw") pod "862b02ed-ae65-4348-8a31-81c1aff80089" (UID: "862b02ed-ae65-4348-8a31-81c1aff80089"). InnerVolumeSpecName "kube-api-access-kkzpw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.603985 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-config\") on node \"crc\" DevicePath \"\""
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.604092 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-config\") on node \"crc\" DevicePath \"\""
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.604175 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkzpw\" (UniqueName: \"kubernetes.io/projected/862b02ed-ae65-4348-8a31-81c1aff80089-kube-api-access-kkzpw\") on node \"crc\" DevicePath \"\""
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.604259 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gwbv\" (UniqueName: \"kubernetes.io/projected/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-kube-api-access-5gwbv\") on node \"crc\" DevicePath \"\""
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.604340 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/862b02ed-ae65-4348-8a31-81c1aff80089-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.604405 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.958882 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"38a4a51f-c380-48fc-8f0e-cdd1ea09fa53","Type":"ContainerStarted","Data":"d71e777540c60b9d720ba610439fceea883ea752400b2d1d8790461bf48312f2"}
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.960026 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.961821 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1fd0c672-e258-4feb-8bbd-26135f92f7fb","Type":"ContainerStarted","Data":"a8e754bcf301635d8dc3f5a9e704295059c792c19bbabbb8a572e39943ecb2ef"}
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.963889 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr" event={"ID":"4a48a7cc-f140-4802-8dd4-2f4bb1c62aed","Type":"ContainerDied","Data":"100f9d605f088bd90ac2e34324dc80fd0839075a52b9732932e0a541bd1a7b13"}
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.963985 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59ddbc48b7-4t5tr"
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.970524 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"04466d10-2177-4361-bd86-333c046b9e52","Type":"ContainerStarted","Data":"819d4936f72ac1e78543d7aa12ff4e73a784d1c30ba80b58ffdf950f2bf4e356"}
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.971999 5012 generic.go:334] "Generic (PLEG): container finished" podID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" containerID="110e2fb48dbdbaaee96e12fd6145e56296c9e6c4ec3ed95da58954f821868b52" exitCode=0
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.972077 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" event={"ID":"79e01828-7818-4fe8-bd3f-8d39e9bf939c","Type":"ContainerDied","Data":"110e2fb48dbdbaaee96e12fd6145e56296c9e6c4ec3ed95da58954f821868b52"}
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.972964 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545d49fd5c-td7mg" event={"ID":"862b02ed-ae65-4348-8a31-81c1aff80089","Type":"ContainerDied","Data":"4e46c1414bde8bffdfc3f7f7ffe96ab966534855c2a4f448c23174850398a8f0"}
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.972990 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545d49fd5c-td7mg"
Feb 19 05:41:37 crc kubenswrapper[5012]: I0219 05:41:37.986955 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.589205015 podStartE2EDuration="33.986907663s" podCreationTimestamp="2026-02-19 05:41:04 +0000 UTC" firstStartedPulling="2026-02-19 05:41:04.994629623 +0000 UTC m=+961.027952192" lastFinishedPulling="2026-02-19 05:41:37.392332271 +0000 UTC m=+993.425654840" observedRunningTime="2026-02-19 05:41:37.984752429 +0000 UTC m=+994.018074998" watchObservedRunningTime="2026-02-19 05:41:37.986907663 +0000 UTC m=+994.020230252"
Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.122555 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59ddbc48b7-4t5tr"]
Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.130443 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59ddbc48b7-4t5tr"]
Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.148640 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-td7mg"]
Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.159592 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-td7mg"]
Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.723016 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a48a7cc-f140-4802-8dd4-2f4bb1c62aed" path="/var/lib/kubelet/pods/4a48a7cc-f140-4802-8dd4-2f4bb1c62aed/volumes"
Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.724025 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="862b02ed-ae65-4348-8a31-81c1aff80089" path="/var/lib/kubelet/pods/862b02ed-ae65-4348-8a31-81c1aff80089/volumes"
Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.724536 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2c96faf-42fc-437a-894d-e1c7f75b3511" path="/var/lib/kubelet/pods/b2c96faf-42fc-437a-894d-e1c7f75b3511/volumes"
Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.982849 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0095712-262e-4562-afac-0f2f4372224d","Type":"ContainerStarted","Data":"1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f"}
Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.985796 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" event={"ID":"79e01828-7818-4fe8-bd3f-8d39e9bf939c","Type":"ContainerStarted","Data":"8f0dc1aa57e08411f9d0f619e65ecab31defd41e57bdd287ce850d95e5dc2423"}
Feb 19 05:41:38 crc kubenswrapper[5012]: I0219 05:41:38.986671 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n"
Feb 19 05:41:39 crc kubenswrapper[5012]: I0219 05:41:39.031139 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" podStartSLOduration=3.217681985 podStartE2EDuration="40.031120634s" podCreationTimestamp="2026-02-19 05:40:59 +0000 UTC" firstStartedPulling="2026-02-19 05:41:00.645526217 +0000 UTC m=+956.678848786" lastFinishedPulling="2026-02-19 05:41:37.458964866 +0000 UTC m=+993.492287435" observedRunningTime="2026-02-19 05:41:39.01861027 +0000 UTC m=+995.051932839" watchObservedRunningTime="2026-02-19 05:41:39.031120634 +0000 UTC m=+995.064443203"
Feb 19 05:41:41 crc kubenswrapper[5012]: I0219 05:41:41.005538 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerStarted","Data":"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6"}
Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.013929 5012 generic.go:334] "Generic (PLEG): container finished" podID="16fbaba1-bd32-4121-8743-99422db74180" containerID="06ec94d8cf824c0dd76739679c91a4936003c296b953898581c57a6b59543f08" exitCode=0
Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.014223 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7qdpg" event={"ID":"16fbaba1-bd32-4121-8743-99422db74180","Type":"ContainerDied","Data":"06ec94d8cf824c0dd76739679c91a4936003c296b953898581c57a6b59543f08"}
Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.016442 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6c04ef21-3d68-44e8-ba69-164f3b32b7a0","Type":"ContainerStarted","Data":"caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73"}
Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.017253 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.020622 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"00790bd0-5fbb-4927-8361-085c9691c171","Type":"ContainerStarted","Data":"9af1ad265e37ae3ab34247dd530f54788d2aeafa36169002f4f1ddfe2730e33d"}
Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.025149 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cr94m" event={"ID":"e2c9ac17-43ef-4ccb-83b1-e20ee03289de","Type":"ContainerStarted","Data":"2bf4f2cf3692bc11c471b18877b32f1b54456d1cebc4847f44252fd08d84746f"}
Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.025729 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-cr94m"
Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.027383 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5a9e6735-4159-4248-a8f5-6714d386901a","Type":"ContainerStarted","Data":"94bc8be702ca89d9fa7574a6fb62d07e6b869f19201ca8f05480725db70a91a2"}
Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.053205 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-cr94m" podStartSLOduration=24.079214195 podStartE2EDuration="33.053187847s" podCreationTimestamp="2026-02-19 05:41:09 +0000 UTC" firstStartedPulling="2026-02-19 05:41:31.865638038 +0000 UTC m=+987.898960647" lastFinishedPulling="2026-02-19 05:41:40.83961173 +0000 UTC m=+996.872934299" observedRunningTime="2026-02-19 05:41:42.052063529 +0000 UTC m=+998.085386118" watchObservedRunningTime="2026-02-19 05:41:42.053187847 +0000 UTC m=+998.086510416"
Feb 19 05:41:42 crc kubenswrapper[5012]: I0219 05:41:42.073141 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.404275761 podStartE2EDuration="36.073127308s" podCreationTimestamp="2026-02-19 05:41:06 +0000 UTC" firstStartedPulling="2026-02-19 05:41:07.502183462 +0000 UTC m=+963.535506021" lastFinishedPulling="2026-02-19 05:41:41.171034999 +0000 UTC m=+997.204357568" observedRunningTime="2026-02-19 05:41:42.07081616 +0000 UTC m=+998.104138739" watchObservedRunningTime="2026-02-19 05:41:42.073127308 +0000 UTC m=+998.106449867"
Feb 19 05:41:43 crc kubenswrapper[5012]: I0219 05:41:43.044422 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7qdpg" event={"ID":"16fbaba1-bd32-4121-8743-99422db74180","Type":"ContainerStarted","Data":"941d1a59bc70fb616fd4c55a311743b95c05c720a0509a2462c3f859fa196b57"}
Feb 19 05:41:44 crc kubenswrapper[5012]: I0219 05:41:44.053467 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5a9e6735-4159-4248-a8f5-6714d386901a","Type":"ContainerStarted","Data":"9175750f7b363090ca147be00bf16c86f82d6d1ee52b66797a25314d9cd24fc3"}
Feb 19 05:41:44 crc kubenswrapper[5012]: I0219 05:41:44.056582 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7qdpg" event={"ID":"16fbaba1-bd32-4121-8743-99422db74180","Type":"ContainerStarted","Data":"0023846d8083910d6ad0b807959a9326119e3f0edf0873e00b02493f7a10978f"}
Feb 19 05:41:44 crc kubenswrapper[5012]: I0219 05:41:44.056821 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7qdpg"
Feb 19 05:41:44 crc kubenswrapper[5012]: I0219 05:41:44.056867 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7qdpg"
Feb 19 05:41:44 crc kubenswrapper[5012]: I0219 05:41:44.059071 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"00790bd0-5fbb-4927-8361-085c9691c171","Type":"ContainerStarted","Data":"1e90b548675c0826432d31830abdf314c6cab338800b3959a0399a4191b6c30a"}
Feb 19 05:41:44 crc kubenswrapper[5012]: I0219 05:41:44.090536 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=18.358913966 podStartE2EDuration="36.090510454s" podCreationTimestamp="2026-02-19 05:41:08 +0000 UTC" firstStartedPulling="2026-02-19 05:41:25.164670975 +0000 UTC m=+981.197993584" lastFinishedPulling="2026-02-19 05:41:42.896267503 +0000 UTC m=+998.929590072" observedRunningTime="2026-02-19 05:41:44.082706048 +0000 UTC m=+1000.116028657" watchObservedRunningTime="2026-02-19 05:41:44.090510454 +0000 UTC m=+1000.123833063"
Feb 19 05:41:44 crc kubenswrapper[5012]: I0219 05:41:44.160191 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=25.684358249 podStartE2EDuration="32.160165654s" podCreationTimestamp="2026-02-19 05:41:12 +0000 UTC" firstStartedPulling="2026-02-19 05:41:36.438530442 +0000 UTC m=+992.471853021" lastFinishedPulling="2026-02-19 05:41:42.914337867 +0000 UTC m=+998.947660426" observedRunningTime="2026-02-19 05:41:44.113874941 +0000 UTC m=+1000.147197540" watchObservedRunningTime="2026-02-19 05:41:44.160165654 +0000 UTC m=+1000.193488233"
Feb 19 05:41:44 crc kubenswrapper[5012]: I0219 05:41:44.165193 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-7qdpg" podStartSLOduration=23.93139257 podStartE2EDuration="35.16518149s" podCreationTimestamp="2026-02-19 05:41:09 +0000 UTC" firstStartedPulling="2026-02-19 05:41:29.584229607 +0000 UTC m=+985.617552176" lastFinishedPulling="2026-02-19 05:41:40.818018517 +0000 UTC m=+996.851341096" observedRunningTime="2026-02-19 05:41:44.157263941 +0000 UTC m=+1000.190586530" watchObservedRunningTime="2026-02-19 05:41:44.16518149 +0000 UTC m=+1000.198504069"
Feb 19 05:41:44 crc kubenswrapper[5012]: I0219 05:41:44.619418 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.133506 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n"
Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.185554 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.197512 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65886c9755-l2845"]
Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.566275 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65886c9755-l2845"
Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.690706 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-dns-svc\") pod \"57e2c914-87bd-46f8-92c7-e87437f6758a\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") "
Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.691204 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "57e2c914-87bd-46f8-92c7-e87437f6758a" (UID: "57e2c914-87bd-46f8-92c7-e87437f6758a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.691391 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-config\") pod \"57e2c914-87bd-46f8-92c7-e87437f6758a\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") "
Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.691644 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-config" (OuterVolumeSpecName: "config") pod "57e2c914-87bd-46f8-92c7-e87437f6758a" (UID: "57e2c914-87bd-46f8-92c7-e87437f6758a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.691696 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvbph\" (UniqueName: \"kubernetes.io/projected/57e2c914-87bd-46f8-92c7-e87437f6758a-kube-api-access-tvbph\") pod \"57e2c914-87bd-46f8-92c7-e87437f6758a\" (UID: \"57e2c914-87bd-46f8-92c7-e87437f6758a\") "
Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.693151 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.693201 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57e2c914-87bd-46f8-92c7-e87437f6758a-config\") on node \"crc\" DevicePath \"\""
Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.819186 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e2c914-87bd-46f8-92c7-e87437f6758a-kube-api-access-tvbph" (OuterVolumeSpecName: "kube-api-access-tvbph") pod "57e2c914-87bd-46f8-92c7-e87437f6758a" (UID: "57e2c914-87bd-46f8-92c7-e87437f6758a"). InnerVolumeSpecName "kube-api-access-tvbph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:41:45 crc kubenswrapper[5012]: I0219 05:41:45.896948 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvbph\" (UniqueName: \"kubernetes.io/projected/57e2c914-87bd-46f8-92c7-e87437f6758a-kube-api-access-tvbph\") on node \"crc\" DevicePath \"\""
Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.074827 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65886c9755-l2845" event={"ID":"57e2c914-87bd-46f8-92c7-e87437f6758a","Type":"ContainerDied","Data":"2ef7e60f2849b48568b2db26b7cbdccc8e5409326bef7865ef80fbf90f513b0e"}
Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.074875 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65886c9755-l2845"
Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.135376 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65886c9755-l2845"]
Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.147564 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65886c9755-l2845"]
Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.184526 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.232861 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.722717 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57e2c914-87bd-46f8-92c7-e87437f6758a" path="/var/lib/kubelet/pods/57e2c914-87bd-46f8-92c7-e87437f6758a/volumes"
Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.770002 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.779413 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-549878d5d7-z4hbd"]
Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.781605 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549878d5d7-z4hbd"
Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.793867 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549878d5d7-z4hbd"]
Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.858864 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.884035 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.937361 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-dns-svc\") pod \"dnsmasq-dns-549878d5d7-z4hbd\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " pod="openstack/dnsmasq-dns-549878d5d7-z4hbd"
Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.937431 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-config\") pod \"dnsmasq-dns-549878d5d7-z4hbd\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " pod="openstack/dnsmasq-dns-549878d5d7-z4hbd"
Feb 19 05:41:46 crc kubenswrapper[5012]: I0219 05:41:46.937487 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt9v8\" (UniqueName: \"kubernetes.io/projected/87a37a10-9d54-42b4-b1ec-a841d2836207-kube-api-access-nt9v8\") pod \"dnsmasq-dns-549878d5d7-z4hbd\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " pod="openstack/dnsmasq-dns-549878d5d7-z4hbd"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.039429 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-config\") pod \"dnsmasq-dns-549878d5d7-z4hbd\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " pod="openstack/dnsmasq-dns-549878d5d7-z4hbd"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.039526 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt9v8\" (UniqueName: \"kubernetes.io/projected/87a37a10-9d54-42b4-b1ec-a841d2836207-kube-api-access-nt9v8\") pod \"dnsmasq-dns-549878d5d7-z4hbd\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " pod="openstack/dnsmasq-dns-549878d5d7-z4hbd"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.039620 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-dns-svc\") pod \"dnsmasq-dns-549878d5d7-z4hbd\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " pod="openstack/dnsmasq-dns-549878d5d7-z4hbd"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.040690 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-dns-svc\") pod \"dnsmasq-dns-549878d5d7-z4hbd\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " pod="openstack/dnsmasq-dns-549878d5d7-z4hbd"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.040735 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-config\") pod \"dnsmasq-dns-549878d5d7-z4hbd\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " pod="openstack/dnsmasq-dns-549878d5d7-z4hbd"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.057348 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt9v8\" (UniqueName: \"kubernetes.io/projected/87a37a10-9d54-42b4-b1ec-a841d2836207-kube-api-access-nt9v8\") pod \"dnsmasq-dns-549878d5d7-z4hbd\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") " pod="openstack/dnsmasq-dns-549878d5d7-z4hbd"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.083059 5012 generic.go:334] "Generic (PLEG): container finished" podID="04466d10-2177-4361-bd86-333c046b9e52" containerID="819d4936f72ac1e78543d7aa12ff4e73a784d1c30ba80b58ffdf950f2bf4e356" exitCode=0
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.083146 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"04466d10-2177-4361-bd86-333c046b9e52","Type":"ContainerDied","Data":"819d4936f72ac1e78543d7aa12ff4e73a784d1c30ba80b58ffdf950f2bf4e356"}
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.083667 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.133195 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.146750 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.146998 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549878d5d7-z4hbd"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.325934 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549878d5d7-z4hbd"]
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.340864 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-695d4f5557-sf54g"]
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.351110 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-695d4f5557-sf54g"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.356229 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.365753 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-695d4f5557-sf54g"]
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.426550 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-mz9j9"]
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.427895 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mz9j9"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.431610 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.439680 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mz9j9"]
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.453419 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-config\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.453715 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b44m7\" (UniqueName: \"kubernetes.io/projected/11b0e720-e74b-43f8-b8f3-207b35594187-kube-api-access-b44m7\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.453900 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-ovsdbserver-sb\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.453950 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-dns-svc\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.497640 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-695d4f5557-sf54g"]
Feb 19 05:41:47 crc kubenswrapper[5012]: E0219 05:41:47.498185 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-b44m7 ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-695d4f5557-sf54g" podUID="11b0e720-e74b-43f8-b8f3-207b35594187"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.540437 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bf9dcd95-lzm7b"]
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.543572 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.547331 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556343 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c711491e-0b8b-4737-88c9-bc5e37051ac1-ovn-rundir\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556409 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-config\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556450 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr4sn\" (UniqueName: \"kubernetes.io/projected/c711491e-0b8b-4737-88c9-bc5e37051ac1-kube-api-access-mr4sn\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556483 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b44m7\" (UniqueName: \"kubernetes.io/projected/11b0e720-e74b-43f8-b8f3-207b35594187-kube-api-access-b44m7\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556499 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c711491e-0b8b-4737-88c9-bc5e37051ac1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556529 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c711491e-0b8b-4737-88c9-bc5e37051ac1-combined-ca-bundle\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556554 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c711491e-0b8b-4737-88c9-bc5e37051ac1-ovs-rundir\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556572 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-ovsdbserver-sb\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556589 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-dns-svc\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.556620 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c711491e-0b8b-4737-88c9-bc5e37051ac1-config\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.557467 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-config\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.557902 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-ovsdbserver-sb\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.558040 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-dns-svc\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.575360 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bf9dcd95-lzm7b"]
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.584021 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b44m7\" (UniqueName: \"kubernetes.io/projected/11b0e720-e74b-43f8-b8f3-207b35594187-kube-api-access-b44m7\") pod \"dnsmasq-dns-695d4f5557-sf54g\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") " pod="openstack/dnsmasq-dns-695d4f5557-sf54g"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.618212 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.619706 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.633480 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.633885 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-plwmh"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.634133 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.634146 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.640873 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660204 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c711491e-0b8b-4737-88c9-bc5e37051ac1-combined-ca-bundle\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660256 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c711491e-0b8b-4737-88c9-bc5e37051ac1-ovs-rundir\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660287 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-sb\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660323 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660358 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c711491e-0b8b-4737-88c9-bc5e37051ac1-config\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660400 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-config\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660431 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c711491e-0b8b-4737-88c9-bc5e37051ac1-ovn-rundir\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9"
Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660450 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25brm\" (UniqueName:
\"kubernetes.io/projected/f22ec0c5-41a9-4f36-adb0-405e5a26d209-kube-api-access-25brm\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660492 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-nb\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660516 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660537 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-scripts\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660558 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr4sn\" (UniqueName: \"kubernetes.io/projected/c711491e-0b8b-4737-88c9-bc5e37051ac1-kube-api-access-mr4sn\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660573 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660598 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660617 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-dns-svc\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660633 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c711491e-0b8b-4737-88c9-bc5e37051ac1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660652 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw57d\" (UniqueName: \"kubernetes.io/projected/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-kube-api-access-bw57d\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.660671 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-config\") 
pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.661450 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c711491e-0b8b-4737-88c9-bc5e37051ac1-config\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.661673 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c711491e-0b8b-4737-88c9-bc5e37051ac1-ovn-rundir\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.662664 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c711491e-0b8b-4737-88c9-bc5e37051ac1-ovs-rundir\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.676366 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c711491e-0b8b-4737-88c9-bc5e37051ac1-combined-ca-bundle\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.676777 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c711491e-0b8b-4737-88c9-bc5e37051ac1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 
05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.690824 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr4sn\" (UniqueName: \"kubernetes.io/projected/c711491e-0b8b-4737-88c9-bc5e37051ac1-kube-api-access-mr4sn\") pod \"ovn-controller-metrics-mz9j9\" (UID: \"c711491e-0b8b-4737-88c9-bc5e37051ac1\") " pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.735660 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549878d5d7-z4hbd"] Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.745609 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mz9j9" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.761778 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.761828 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-scripts\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.761854 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.761919 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.761936 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-dns-svc\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.761979 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw57d\" (UniqueName: \"kubernetes.io/projected/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-kube-api-access-bw57d\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.762006 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-config\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.762079 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-sb\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.762098 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " 
pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.762159 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-config\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.762196 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25brm\" (UniqueName: \"kubernetes.io/projected/f22ec0c5-41a9-4f36-adb0-405e5a26d209-kube-api-access-25brm\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.762236 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-nb\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.763106 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-nb\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.763704 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-scripts\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.765039 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-config\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.766829 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-dns-svc\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.771729 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.771877 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-config\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.774006 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.774211 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-sb\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: 
\"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.775972 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.776436 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.778060 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw57d\" (UniqueName: \"kubernetes.io/projected/e3e8f67d-0748-4bff-b7c5-8432c7e4ab64-kube-api-access-bw57d\") pod \"ovn-northd-0\" (UID: \"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64\") " pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.782478 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25brm\" (UniqueName: \"kubernetes.io/projected/f22ec0c5-41a9-4f36-adb0-405e5a26d209-kube-api-access-25brm\") pod \"dnsmasq-dns-79bf9dcd95-lzm7b\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") " pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.859372 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.969334 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.976929 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.983145 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.988231 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.988451 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-q58gk" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.988578 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 19 05:41:47 crc kubenswrapper[5012]: I0219 05:41:47.988677 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.004155 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.074787 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.074869 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c089afc3-1655-4675-b4e1-a62ec6929498-lock\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.074905 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c089afc3-1655-4675-b4e1-a62ec6929498-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.074995 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zgg5\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-kube-api-access-8zgg5\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.075062 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c089afc3-1655-4675-b4e1-a62ec6929498-cache\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.075101 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.113126 5012 generic.go:334] "Generic (PLEG): container finished" podID="87a37a10-9d54-42b4-b1ec-a841d2836207" containerID="b4e25c113c7481f26d7d1cc0e975114480050ead6685f86ef56b5ca5e4c0cc32" exitCode=0 Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.113358 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" 
event={"ID":"87a37a10-9d54-42b4-b1ec-a841d2836207","Type":"ContainerDied","Data":"b4e25c113c7481f26d7d1cc0e975114480050ead6685f86ef56b5ca5e4c0cc32"} Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.113387 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" event={"ID":"87a37a10-9d54-42b4-b1ec-a841d2836207","Type":"ContainerStarted","Data":"9cf59006836035d8afc016102789032733760d2ec7f8061587b620acf3488db0"} Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.130235 5012 generic.go:334] "Generic (PLEG): container finished" podID="1fd0c672-e258-4feb-8bbd-26135f92f7fb" containerID="a8e754bcf301635d8dc3f5a9e704295059c792c19bbabbb8a572e39943ecb2ef" exitCode=0 Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.130294 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1fd0c672-e258-4feb-8bbd-26135f92f7fb","Type":"ContainerDied","Data":"a8e754bcf301635d8dc3f5a9e704295059c792c19bbabbb8a572e39943ecb2ef"} Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.145601 5012 generic.go:334] "Generic (PLEG): container finished" podID="1e31edbd-c20b-420d-8888-cafb392410cd" containerID="2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6" exitCode=0 Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.145687 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerDied","Data":"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6"} Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.158439 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"04466d10-2177-4361-bd86-333c046b9e52","Type":"ContainerStarted","Data":"10b22f8ff53536eaa0e8f250f73cdee88b2784d8c00c54045e1b0d74df53d3e0"} Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.158624 5012 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-695d4f5557-sf54g" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.198946 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c089afc3-1655-4675-b4e1-a62ec6929498-cache\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.199050 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.199106 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.199179 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c089afc3-1655-4675-b4e1-a62ec6929498-lock\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.199238 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c089afc3-1655-4675-b4e1-a62ec6929498-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.199357 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8zgg5\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-kube-api-access-8zgg5\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.200473 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c089afc3-1655-4675-b4e1-a62ec6929498-cache\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.201362 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.206061 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c089afc3-1655-4675-b4e1-a62ec6929498-lock\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:48 crc kubenswrapper[5012]: E0219 05:41:48.206753 5012 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 05:41:48 crc kubenswrapper[5012]: E0219 05:41:48.206827 5012 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 05:41:48 crc kubenswrapper[5012]: E0219 05:41:48.206922 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift podName:c089afc3-1655-4675-b4e1-a62ec6929498 nodeName:}" failed. 
No retries permitted until 2026-02-19 05:41:48.706906888 +0000 UTC m=+1004.740229457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift") pod "swift-storage-0" (UID: "c089afc3-1655-4675-b4e1-a62ec6929498") : configmap "swift-ring-files" not found
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.214976 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=13.816273362 podStartE2EDuration="46.21495414s" podCreationTimestamp="2026-02-19 05:41:02 +0000 UTC" firstStartedPulling="2026-02-19 05:41:04.963508136 +0000 UTC m=+960.996830695" lastFinishedPulling="2026-02-19 05:41:37.362188904 +0000 UTC m=+993.395511473" observedRunningTime="2026-02-19 05:41:48.203611205 +0000 UTC m=+1004.236933794" watchObservedRunningTime="2026-02-19 05:41:48.21495414 +0000 UTC m=+1004.248276709"
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.227884 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c089afc3-1655-4675-b4e1-a62ec6929498-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0"
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.237971 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mz9j9"]
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.244445 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zgg5\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-kube-api-access-8zgg5\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0"
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.251200 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0"
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.299193 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-695d4f5557-sf54g"
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.357678 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bf9dcd95-lzm7b"]
Feb 19 05:41:48 crc kubenswrapper[5012]: W0219 05:41:48.387929 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf22ec0c5_41a9_4f36_adb0_405e5a26d209.slice/crio-6bd9704878ce796ee545aeab88709706e42c6cb9f878bf8b26a1785cb4cf93bf WatchSource:0}: Error finding container 6bd9704878ce796ee545aeab88709706e42c6cb9f878bf8b26a1785cb4cf93bf: Status 404 returned error can't find the container with id 6bd9704878ce796ee545aeab88709706e42c6cb9f878bf8b26a1785cb4cf93bf
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.402812 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-config" (OuterVolumeSpecName: "config") pod "11b0e720-e74b-43f8-b8f3-207b35594187" (UID: "11b0e720-e74b-43f8-b8f3-207b35594187"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.402875 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-config\") pod \"11b0e720-e74b-43f8-b8f3-207b35594187\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") "
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.403049 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-ovsdbserver-sb\") pod \"11b0e720-e74b-43f8-b8f3-207b35594187\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") "
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.403167 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-dns-svc\") pod \"11b0e720-e74b-43f8-b8f3-207b35594187\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") "
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.403745 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "11b0e720-e74b-43f8-b8f3-207b35594187" (UID: "11b0e720-e74b-43f8-b8f3-207b35594187"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.404065 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11b0e720-e74b-43f8-b8f3-207b35594187" (UID: "11b0e720-e74b-43f8-b8f3-207b35594187"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.404461 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b44m7\" (UniqueName: \"kubernetes.io/projected/11b0e720-e74b-43f8-b8f3-207b35594187-kube-api-access-b44m7\") pod \"11b0e720-e74b-43f8-b8f3-207b35594187\" (UID: \"11b0e720-e74b-43f8-b8f3-207b35594187\") "
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.408933 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11b0e720-e74b-43f8-b8f3-207b35594187-kube-api-access-b44m7" (OuterVolumeSpecName: "kube-api-access-b44m7") pod "11b0e720-e74b-43f8-b8f3-207b35594187" (UID: "11b0e720-e74b-43f8-b8f3-207b35594187"). InnerVolumeSpecName "kube-api-access-b44m7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.409562 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-config\") on node \"crc\" DevicePath \"\""
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.409613 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.409629 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11b0e720-e74b-43f8-b8f3-207b35594187-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.468443 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549878d5d7-z4hbd"
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.511827 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b44m7\" (UniqueName: \"kubernetes.io/projected/11b0e720-e74b-43f8-b8f3-207b35594187-kube-api-access-b44m7\") on node \"crc\" DevicePath \"\""
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.521482 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 19 05:41:48 crc kubenswrapper[5012]: W0219 05:41:48.533410 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3e8f67d_0748_4bff_b7c5_8432c7e4ab64.slice/crio-32eb408fc4e94735fe1da4222ae269096a83a1f02f502a8aa984b5e84249b30f WatchSource:0}: Error finding container 32eb408fc4e94735fe1da4222ae269096a83a1f02f502a8aa984b5e84249b30f: Status 404 returned error can't find the container with id 32eb408fc4e94735fe1da4222ae269096a83a1f02f502a8aa984b5e84249b30f
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.613485 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-config\") pod \"87a37a10-9d54-42b4-b1ec-a841d2836207\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") "
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.613759 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-dns-svc\") pod \"87a37a10-9d54-42b4-b1ec-a841d2836207\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") "
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.613890 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt9v8\" (UniqueName: \"kubernetes.io/projected/87a37a10-9d54-42b4-b1ec-a841d2836207-kube-api-access-nt9v8\") pod \"87a37a10-9d54-42b4-b1ec-a841d2836207\" (UID: \"87a37a10-9d54-42b4-b1ec-a841d2836207\") "
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.620440 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87a37a10-9d54-42b4-b1ec-a841d2836207-kube-api-access-nt9v8" (OuterVolumeSpecName: "kube-api-access-nt9v8") pod "87a37a10-9d54-42b4-b1ec-a841d2836207" (UID: "87a37a10-9d54-42b4-b1ec-a841d2836207"). InnerVolumeSpecName "kube-api-access-nt9v8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.632904 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-config" (OuterVolumeSpecName: "config") pod "87a37a10-9d54-42b4-b1ec-a841d2836207" (UID: "87a37a10-9d54-42b4-b1ec-a841d2836207"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.633548 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "87a37a10-9d54-42b4-b1ec-a841d2836207" (UID: "87a37a10-9d54-42b4-b1ec-a841d2836207"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.715981 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0"
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.716080 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.716097 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt9v8\" (UniqueName: \"kubernetes.io/projected/87a37a10-9d54-42b4-b1ec-a841d2836207-kube-api-access-nt9v8\") on node \"crc\" DevicePath \"\""
Feb 19 05:41:48 crc kubenswrapper[5012]: I0219 05:41:48.716111 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a37a10-9d54-42b4-b1ec-a841d2836207-config\") on node \"crc\" DevicePath \"\""
Feb 19 05:41:48 crc kubenswrapper[5012]: E0219 05:41:48.716280 5012 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 19 05:41:48 crc kubenswrapper[5012]: E0219 05:41:48.716346 5012 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 19 05:41:48 crc kubenswrapper[5012]: E0219 05:41:48.716443 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift podName:c089afc3-1655-4675-b4e1-a62ec6929498 nodeName:}" failed. No retries permitted until 2026-02-19 05:41:49.716412221 +0000 UTC m=+1005.749734780 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift") pod "swift-storage-0" (UID: "c089afc3-1655-4675-b4e1-a62ec6929498") : configmap "swift-ring-files" not found
Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.165731 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mz9j9" event={"ID":"c711491e-0b8b-4737-88c9-bc5e37051ac1","Type":"ContainerStarted","Data":"aab06d4f2c3375336ad944f107bcde4a55eead8b6008771d38c6fab07f604ea7"}
Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.166118 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mz9j9" event={"ID":"c711491e-0b8b-4737-88c9-bc5e37051ac1","Type":"ContainerStarted","Data":"dd9b7de8fbd16d70fd18f28a60fbf2c541534b56ead82da3a614566b1be7ec6e"}
Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.168011 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549878d5d7-z4hbd" event={"ID":"87a37a10-9d54-42b4-b1ec-a841d2836207","Type":"ContainerDied","Data":"9cf59006836035d8afc016102789032733760d2ec7f8061587b620acf3488db0"}
Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.168063 5012 scope.go:117] "RemoveContainer" containerID="b4e25c113c7481f26d7d1cc0e975114480050ead6685f86ef56b5ca5e4c0cc32"
Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.168119 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549878d5d7-z4hbd"
Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.172047 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1fd0c672-e258-4feb-8bbd-26135f92f7fb","Type":"ContainerStarted","Data":"b8e7ff8da781605df4b50e10d2845af44ab26f79df0749f6633a5df64e1cdeaa"}
Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.173646 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64","Type":"ContainerStarted","Data":"32eb408fc4e94735fe1da4222ae269096a83a1f02f502a8aa984b5e84249b30f"}
Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.175202 5012 generic.go:334] "Generic (PLEG): container finished" podID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerID="d961f9b5d55a9bfaff596c3b756f78502ea40069f8fb1a18443be8e579f64c1b" exitCode=0
Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.175268 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-695d4f5557-sf54g"
Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.175250 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" event={"ID":"f22ec0c5-41a9-4f36-adb0-405e5a26d209","Type":"ContainerDied","Data":"d961f9b5d55a9bfaff596c3b756f78502ea40069f8fb1a18443be8e579f64c1b"}
Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.175380 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" event={"ID":"f22ec0c5-41a9-4f36-adb0-405e5a26d209","Type":"ContainerStarted","Data":"6bd9704878ce796ee545aeab88709706e42c6cb9f878bf8b26a1785cb4cf93bf"}
Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.236943 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=14.271588441 podStartE2EDuration="48.236910101s" podCreationTimestamp="2026-02-19 05:41:01 +0000 UTC" firstStartedPulling="2026-02-19 05:41:03.343120283 +0000 UTC m=+959.376442852" lastFinishedPulling="2026-02-19 05:41:37.308441953 +0000 UTC m=+993.341764512" observedRunningTime="2026-02-19 05:41:49.235736642 +0000 UTC m=+1005.269059211" watchObservedRunningTime="2026-02-19 05:41:49.236910101 +0000 UTC m=+1005.270232670"
Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.241732 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-mz9j9" podStartSLOduration=2.241725592 podStartE2EDuration="2.241725592s" podCreationTimestamp="2026-02-19 05:41:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:41:49.189209973 +0000 UTC m=+1005.222532542" watchObservedRunningTime="2026-02-19 05:41:49.241725592 +0000 UTC m=+1005.275048161"
Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.312365 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-695d4f5557-sf54g"]
Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.337650 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-695d4f5557-sf54g"]
Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.343858 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549878d5d7-z4hbd"]
Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.349202 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-549878d5d7-z4hbd"]
Feb 19 05:41:49 crc kubenswrapper[5012]: I0219 05:41:49.735185 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0"
Feb 19 05:41:49 crc kubenswrapper[5012]: E0219 05:41:49.736629 5012 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 19 05:41:49 crc kubenswrapper[5012]: E0219 05:41:49.736644 5012 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 19 05:41:49 crc kubenswrapper[5012]: E0219 05:41:49.736680 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift podName:c089afc3-1655-4675-b4e1-a62ec6929498 nodeName:}" failed. No retries permitted until 2026-02-19 05:41:51.73666657 +0000 UTC m=+1007.769989139 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift") pod "swift-storage-0" (UID: "c089afc3-1655-4675-b4e1-a62ec6929498") : configmap "swift-ring-files" not found
Feb 19 05:41:50 crc kubenswrapper[5012]: I0219 05:41:50.209819 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64","Type":"ContainerStarted","Data":"eda44d9bc80983dd7021f281b18a7b62b552db2c1bc972d5c8c5f911d7a7d392"}
Feb 19 05:41:50 crc kubenswrapper[5012]: I0219 05:41:50.209871 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"e3e8f67d-0748-4bff-b7c5-8432c7e4ab64","Type":"ContainerStarted","Data":"c0db4f523c9f822a4d993669cc5337e8476abe653273676901f2ae54825cbf26"}
Feb 19 05:41:50 crc kubenswrapper[5012]: I0219 05:41:50.210396 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Feb 19 05:41:50 crc kubenswrapper[5012]: I0219 05:41:50.227295 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" event={"ID":"f22ec0c5-41a9-4f36-adb0-405e5a26d209","Type":"ContainerStarted","Data":"7ff9e9710973d65273f4c7d1b2b07184b8147f2ccbf37eac212553af6a1fa77e"}
Feb 19 05:41:50 crc kubenswrapper[5012]: I0219 05:41:50.228125 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b"
Feb 19 05:41:50 crc kubenswrapper[5012]: I0219 05:41:50.239906 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.339218161 podStartE2EDuration="3.239887005s" podCreationTimestamp="2026-02-19 05:41:47 +0000 UTC" firstStartedPulling="2026-02-19 05:41:48.535813193 +0000 UTC m=+1004.569135762" lastFinishedPulling="2026-02-19 05:41:49.436482037 +0000 UTC m=+1005.469804606" observedRunningTime="2026-02-19 05:41:50.239567437 +0000 UTC m=+1006.272889996" watchObservedRunningTime="2026-02-19 05:41:50.239887005 +0000 UTC m=+1006.273209574"
Feb 19 05:41:50 crc kubenswrapper[5012]: I0219 05:41:50.715281 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11b0e720-e74b-43f8-b8f3-207b35594187" path="/var/lib/kubelet/pods/11b0e720-e74b-43f8-b8f3-207b35594187/volumes"
Feb 19 05:41:50 crc kubenswrapper[5012]: I0219 05:41:50.716451 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87a37a10-9d54-42b4-b1ec-a841d2836207" path="/var/lib/kubelet/pods/87a37a10-9d54-42b4-b1ec-a841d2836207/volumes"
Feb 19 05:41:50 crc kubenswrapper[5012]: I0219 05:41:50.738413 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" podStartSLOduration=3.7383896720000003 podStartE2EDuration="3.738389672s" podCreationTimestamp="2026-02-19 05:41:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:41:50.271877309 +0000 UTC m=+1006.305199878" watchObservedRunningTime="2026-02-19 05:41:50.738389672 +0000 UTC m=+1006.771712251"
Feb 19 05:41:51 crc kubenswrapper[5012]: I0219 05:41:51.797269 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0"
Feb 19 05:41:51 crc kubenswrapper[5012]: E0219 05:41:51.797469 5012 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 19 05:41:51 crc kubenswrapper[5012]: E0219 05:41:51.797728 5012 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 19 05:41:51 crc kubenswrapper[5012]: E0219 05:41:51.797799 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift podName:c089afc3-1655-4675-b4e1-a62ec6929498 nodeName:}" failed. No retries permitted until 2026-02-19 05:41:55.797774455 +0000 UTC m=+1011.831097114 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift") pod "swift-storage-0" (UID: "c089afc3-1655-4675-b4e1-a62ec6929498") : configmap "swift-ring-files" not found
Feb 19 05:41:51 crc kubenswrapper[5012]: I0219 05:41:51.887751 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-5vxhd"]
Feb 19 05:41:51 crc kubenswrapper[5012]: E0219 05:41:51.888073 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a37a10-9d54-42b4-b1ec-a841d2836207" containerName="init"
Feb 19 05:41:51 crc kubenswrapper[5012]: I0219 05:41:51.888087 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a37a10-9d54-42b4-b1ec-a841d2836207" containerName="init"
Feb 19 05:41:51 crc kubenswrapper[5012]: I0219 05:41:51.888285 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a37a10-9d54-42b4-b1ec-a841d2836207" containerName="init"
Feb 19 05:41:51 crc kubenswrapper[5012]: I0219 05:41:51.888891 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:51 crc kubenswrapper[5012]: I0219 05:41:51.891106 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 19 05:41:51 crc kubenswrapper[5012]: I0219 05:41:51.891140 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 19 05:41:51 crc kubenswrapper[5012]: I0219 05:41:51.891115 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 19 05:41:51 crc kubenswrapper[5012]: I0219 05:41:51.928413 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5vxhd"]
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.001992 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szrf5\" (UniqueName: \"kubernetes.io/projected/d05da3bc-6c22-4956-9fab-331eed79d175-kube-api-access-szrf5\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.002032 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-dispersionconf\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.002224 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d05da3bc-6c22-4956-9fab-331eed79d175-etc-swift\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.002328 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-swiftconf\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.002354 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-scripts\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.002370 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-ring-data-devices\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.002455 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-combined-ca-bundle\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.104366 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-swiftconf\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.104415 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-scripts\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.104436 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-ring-data-devices\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.104474 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-combined-ca-bundle\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.104566 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szrf5\" (UniqueName: \"kubernetes.io/projected/d05da3bc-6c22-4956-9fab-331eed79d175-kube-api-access-szrf5\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.104585 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-dispersionconf\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.104652 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d05da3bc-6c22-4956-9fab-331eed79d175-etc-swift\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.105118 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d05da3bc-6c22-4956-9fab-331eed79d175-etc-swift\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.105711 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-ring-data-devices\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.105820 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-scripts\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.110789 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-combined-ca-bundle\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.113824 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-dispersionconf\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.114284 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-swiftconf\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.133221 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szrf5\" (UniqueName: \"kubernetes.io/projected/d05da3bc-6c22-4956-9fab-331eed79d175-kube-api-access-szrf5\") pod \"swift-ring-rebalance-5vxhd\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") " pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.208537 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.253282 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a13d3004-2045-4daf-a925-7eccf541b1b4","Type":"ContainerStarted","Data":"0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e"}
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.660071 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.660524 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 19 05:41:52 crc kubenswrapper[5012]: I0219 05:41:52.691508 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5vxhd"]
Feb 19 05:41:53 crc kubenswrapper[5012]: I0219 05:41:53.331369 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 19 05:41:53 crc kubenswrapper[5012]: I0219 05:41:53.468566 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.399988 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.400043 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.525324 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.910636 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-r8ddf"]
Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.911674 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r8ddf"
Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.923589 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-91bd-account-create-update-54r7l"]
Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.924922 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-91bd-account-create-update-54r7l"
Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.926407 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.934359 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-r8ddf"]
Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.950258 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a75d3b-186a-41d6-92a8-94729c520aa5-operator-scripts\") pod \"glance-91bd-account-create-update-54r7l\" (UID: \"90a75d3b-186a-41d6-92a8-94729c520aa5\") " pod="openstack/glance-91bd-account-create-update-54r7l"
Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.950352 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mplk\" (UniqueName: \"kubernetes.io/projected/90a75d3b-186a-41d6-92a8-94729c520aa5-kube-api-access-4mplk\") pod \"glance-91bd-account-create-update-54r7l\" (UID: \"90a75d3b-186a-41d6-92a8-94729c520aa5\") " pod="openstack/glance-91bd-account-create-update-54r7l"
Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.950396 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1e3020d-901d-4649-9e94-c5c0a4cc523d-operator-scripts\") pod \"glance-db-create-r8ddf\" (UID: \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\") " pod="openstack/glance-db-create-r8ddf"
Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.950434 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sttkj\" (UniqueName: \"kubernetes.io/projected/e1e3020d-901d-4649-9e94-c5c0a4cc523d-kube-api-access-sttkj\") pod \"glance-db-create-r8ddf\" (UID: \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\") " pod="openstack/glance-db-create-r8ddf"
Feb 19 05:41:54 crc kubenswrapper[5012]: I0219 05:41:54.953568 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-91bd-account-create-update-54r7l"]
Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.051738 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sttkj\" (UniqueName: \"kubernetes.io/projected/e1e3020d-901d-4649-9e94-c5c0a4cc523d-kube-api-access-sttkj\") pod \"glance-db-create-r8ddf\" (UID: \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\") " pod="openstack/glance-db-create-r8ddf"
Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.051836 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a75d3b-186a-41d6-92a8-94729c520aa5-operator-scripts\") pod \"glance-91bd-account-create-update-54r7l\" (UID: \"90a75d3b-186a-41d6-92a8-94729c520aa5\") " pod="openstack/glance-91bd-account-create-update-54r7l"
Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.051921 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mplk\" (UniqueName: \"kubernetes.io/projected/90a75d3b-186a-41d6-92a8-94729c520aa5-kube-api-access-4mplk\") pod \"glance-91bd-account-create-update-54r7l\" (UID: \"90a75d3b-186a-41d6-92a8-94729c520aa5\") " pod="openstack/glance-91bd-account-create-update-54r7l"
Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.051985 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1e3020d-901d-4649-9e94-c5c0a4cc523d-operator-scripts\") pod \"glance-db-create-r8ddf\" (UID: \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\") " pod="openstack/glance-db-create-r8ddf"
Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.052690 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a75d3b-186a-41d6-92a8-94729c520aa5-operator-scripts\") pod \"glance-91bd-account-create-update-54r7l\" (UID: \"90a75d3b-186a-41d6-92a8-94729c520aa5\") " pod="openstack/glance-91bd-account-create-update-54r7l"
Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.053830 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1e3020d-901d-4649-9e94-c5c0a4cc523d-operator-scripts\") pod \"glance-db-create-r8ddf\" (UID: \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\") " pod="openstack/glance-db-create-r8ddf"
Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.068581 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sttkj\" (UniqueName: \"kubernetes.io/projected/e1e3020d-901d-4649-9e94-c5c0a4cc523d-kube-api-access-sttkj\") pod \"glance-db-create-r8ddf\" (UID: \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\") " pod="openstack/glance-db-create-r8ddf"
Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.075950 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mplk\" (UniqueName: \"kubernetes.io/projected/90a75d3b-186a-41d6-92a8-94729c520aa5-kube-api-access-4mplk\") pod \"glance-91bd-account-create-update-54r7l\" (UID: \"90a75d3b-186a-41d6-92a8-94729c520aa5\") " pod="openstack/glance-91bd-account-create-update-54r7l"
Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.230107 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r8ddf"
Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.241362 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-91bd-account-create-update-54r7l" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.312027 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5vxhd" event={"ID":"d05da3bc-6c22-4956-9fab-331eed79d175","Type":"ContainerStarted","Data":"474f2d807b97d130f773ae47927296219b201d325e7ae32ec13971a56bf04456"} Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.514433 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.572344 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-jktc7"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.574108 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jktc7" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.578280 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jktc7"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.663942 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-operator-scripts\") pod \"keystone-db-create-jktc7\" (UID: \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\") " pod="openstack/keystone-db-create-jktc7" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.663987 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28csb\" (UniqueName: \"kubernetes.io/projected/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-kube-api-access-28csb\") pod \"keystone-db-create-jktc7\" (UID: \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\") " pod="openstack/keystone-db-create-jktc7" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.665569 5012 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-b5f0-account-create-update-l7b8m"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.677731 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b5f0-account-create-update-l7b8m"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.677823 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.684241 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.758316 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-hthfx"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.759883 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hthfx" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.767165 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-operator-scripts\") pod \"keystone-db-create-jktc7\" (UID: \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\") " pod="openstack/keystone-db-create-jktc7" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.767202 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28csb\" (UniqueName: \"kubernetes.io/projected/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-kube-api-access-28csb\") pod \"keystone-db-create-jktc7\" (UID: \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\") " pod="openstack/keystone-db-create-jktc7" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.775583 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-operator-scripts\") pod 
\"keystone-db-create-jktc7\" (UID: \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\") " pod="openstack/keystone-db-create-jktc7" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.783161 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hthfx"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.797760 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28csb\" (UniqueName: \"kubernetes.io/projected/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-kube-api-access-28csb\") pod \"keystone-db-create-jktc7\" (UID: \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\") " pod="openstack/keystone-db-create-jktc7" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.860353 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-22e2-account-create-update-vddht"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.864940 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.869412 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt2lm\" (UniqueName: \"kubernetes.io/projected/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-kube-api-access-nt2lm\") pod \"placement-db-create-hthfx\" (UID: \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\") " pod="openstack/placement-db-create-hthfx" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.869763 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.870670 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqkd8\" 
(UniqueName: \"kubernetes.io/projected/533d4699-332c-4ceb-ad6e-77c680699214-kube-api-access-fqkd8\") pod \"keystone-b5f0-account-create-update-l7b8m\" (UID: \"533d4699-332c-4ceb-ad6e-77c680699214\") " pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.870700 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/533d4699-332c-4ceb-ad6e-77c680699214-operator-scripts\") pod \"keystone-b5f0-account-create-update-l7b8m\" (UID: \"533d4699-332c-4ceb-ad6e-77c680699214\") " pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.870727 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-operator-scripts\") pod \"placement-db-create-hthfx\" (UID: \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\") " pod="openstack/placement-db-create-hthfx" Feb 19 05:41:55 crc kubenswrapper[5012]: E0219 05:41:55.870862 5012 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 05:41:55 crc kubenswrapper[5012]: E0219 05:41:55.870875 5012 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 05:41:55 crc kubenswrapper[5012]: E0219 05:41:55.870911 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift podName:c089afc3-1655-4675-b4e1-a62ec6929498 nodeName:}" failed. No retries permitted until 2026-02-19 05:42:03.870898151 +0000 UTC m=+1019.904220720 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift") pod "swift-storage-0" (UID: "c089afc3-1655-4675-b4e1-a62ec6929498") : configmap "swift-ring-files" not found Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.875378 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-22e2-account-create-update-vddht"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.882595 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.907646 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jktc7" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.934856 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-r8ddf"] Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.973244 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kq4b\" (UniqueName: \"kubernetes.io/projected/d1e7d95a-d78a-4d54-a66b-565114b4823e-kube-api-access-5kq4b\") pod \"placement-22e2-account-create-update-vddht\" (UID: \"d1e7d95a-d78a-4d54-a66b-565114b4823e\") " pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.973330 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqkd8\" (UniqueName: \"kubernetes.io/projected/533d4699-332c-4ceb-ad6e-77c680699214-kube-api-access-fqkd8\") pod \"keystone-b5f0-account-create-update-l7b8m\" (UID: \"533d4699-332c-4ceb-ad6e-77c680699214\") " pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.973361 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/533d4699-332c-4ceb-ad6e-77c680699214-operator-scripts\") pod \"keystone-b5f0-account-create-update-l7b8m\" (UID: \"533d4699-332c-4ceb-ad6e-77c680699214\") " pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.973383 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e7d95a-d78a-4d54-a66b-565114b4823e-operator-scripts\") pod \"placement-22e2-account-create-update-vddht\" (UID: \"d1e7d95a-d78a-4d54-a66b-565114b4823e\") " pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.973407 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-operator-scripts\") pod \"placement-db-create-hthfx\" (UID: \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\") " pod="openstack/placement-db-create-hthfx" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.973473 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt2lm\" (UniqueName: \"kubernetes.io/projected/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-kube-api-access-nt2lm\") pod \"placement-db-create-hthfx\" (UID: \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\") " pod="openstack/placement-db-create-hthfx" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.974356 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-operator-scripts\") pod \"placement-db-create-hthfx\" (UID: \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\") " pod="openstack/placement-db-create-hthfx" Feb 19 05:41:55 crc kubenswrapper[5012]: I0219 05:41:55.974641 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/533d4699-332c-4ceb-ad6e-77c680699214-operator-scripts\") pod \"keystone-b5f0-account-create-update-l7b8m\" (UID: \"533d4699-332c-4ceb-ad6e-77c680699214\") " pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:55.999992 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt2lm\" (UniqueName: \"kubernetes.io/projected/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-kube-api-access-nt2lm\") pod \"placement-db-create-hthfx\" (UID: \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\") " pod="openstack/placement-db-create-hthfx" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.003839 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqkd8\" (UniqueName: \"kubernetes.io/projected/533d4699-332c-4ceb-ad6e-77c680699214-kube-api-access-fqkd8\") pod \"keystone-b5f0-account-create-update-l7b8m\" (UID: \"533d4699-332c-4ceb-ad6e-77c680699214\") " pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.019954 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-91bd-account-create-update-54r7l"] Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.075598 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kq4b\" (UniqueName: \"kubernetes.io/projected/d1e7d95a-d78a-4d54-a66b-565114b4823e-kube-api-access-5kq4b\") pod \"placement-22e2-account-create-update-vddht\" (UID: \"d1e7d95a-d78a-4d54-a66b-565114b4823e\") " pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.076139 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e7d95a-d78a-4d54-a66b-565114b4823e-operator-scripts\") pod 
\"placement-22e2-account-create-update-vddht\" (UID: \"d1e7d95a-d78a-4d54-a66b-565114b4823e\") " pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.076783 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e7d95a-d78a-4d54-a66b-565114b4823e-operator-scripts\") pod \"placement-22e2-account-create-update-vddht\" (UID: \"d1e7d95a-d78a-4d54-a66b-565114b4823e\") " pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.106207 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hthfx" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.130205 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kq4b\" (UniqueName: \"kubernetes.io/projected/d1e7d95a-d78a-4d54-a66b-565114b4823e-kube-api-access-5kq4b\") pod \"placement-22e2-account-create-update-vddht\" (UID: \"d1e7d95a-d78a-4d54-a66b-565114b4823e\") " pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.184384 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.296473 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.341662 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerStarted","Data":"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c"} Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.343312 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r8ddf" event={"ID":"e1e3020d-901d-4649-9e94-c5c0a4cc523d","Type":"ContainerStarted","Data":"41cb74d66ab3e64634057788877cc78c4b6583899219dd015cbf188304216e08"} Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.345019 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-91bd-account-create-update-54r7l" event={"ID":"90a75d3b-186a-41d6-92a8-94729c520aa5","Type":"ContainerStarted","Data":"2cc8742fd7eb09f99450e71f46c7d9913eee7444573c215e9049c6c4deb3c4af"} Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.452272 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jktc7"] Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.561721 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-22e2-account-create-update-vddht"] Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.578391 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hthfx"] Feb 19 05:41:56 crc kubenswrapper[5012]: W0219 05:41:56.589686 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1e7d95a_d78a_4d54_a66b_565114b4823e.slice/crio-415948d38f4e1e498d619eb6a6b2469946c3c60046c19bbad1963803a9a9ee0e WatchSource:0}: Error finding container 415948d38f4e1e498d619eb6a6b2469946c3c60046c19bbad1963803a9a9ee0e: Status 404 returned error can't find the container 
with id 415948d38f4e1e498d619eb6a6b2469946c3c60046c19bbad1963803a9a9ee0e Feb 19 05:41:56 crc kubenswrapper[5012]: W0219 05:41:56.595448 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f5d1fc5_7a37_4ed2_86d6_7e0689c7b65a.slice/crio-32ae99cc9db8c5e5c207480404d461eddd26622f90d7889b9a998d9df04ee55b WatchSource:0}: Error finding container 32ae99cc9db8c5e5c207480404d461eddd26622f90d7889b9a998d9df04ee55b: Status 404 returned error can't find the container with id 32ae99cc9db8c5e5c207480404d461eddd26622f90d7889b9a998d9df04ee55b Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.751990 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-vjzm9"] Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.752942 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-vjzm9"] Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.753022 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-vjzm9" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.826525 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-34a7-account-create-update-84f2g"] Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.829142 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.835103 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.845016 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-34a7-account-create-update-84f2g"] Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.877375 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b5f0-account-create-update-l7b8m"] Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.891253 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l9wv\" (UniqueName: \"kubernetes.io/projected/a973520b-997d-4c23-a056-590c96123e43-kube-api-access-4l9wv\") pod \"watcher-db-create-vjzm9\" (UID: \"a973520b-997d-4c23-a056-590c96123e43\") " pod="openstack/watcher-db-create-vjzm9" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.891345 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a973520b-997d-4c23-a056-590c96123e43-operator-scripts\") pod \"watcher-db-create-vjzm9\" (UID: \"a973520b-997d-4c23-a056-590c96123e43\") " pod="openstack/watcher-db-create-vjzm9" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.992704 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e45e098-f689-4015-9871-5f66e5d7bef1-operator-scripts\") pod \"watcher-34a7-account-create-update-84f2g\" (UID: \"6e45e098-f689-4015-9871-5f66e5d7bef1\") " pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.992786 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-ld6zd\" (UniqueName: \"kubernetes.io/projected/6e45e098-f689-4015-9871-5f66e5d7bef1-kube-api-access-ld6zd\") pod \"watcher-34a7-account-create-update-84f2g\" (UID: \"6e45e098-f689-4015-9871-5f66e5d7bef1\") " pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.992832 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l9wv\" (UniqueName: \"kubernetes.io/projected/a973520b-997d-4c23-a056-590c96123e43-kube-api-access-4l9wv\") pod \"watcher-db-create-vjzm9\" (UID: \"a973520b-997d-4c23-a056-590c96123e43\") " pod="openstack/watcher-db-create-vjzm9" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.992881 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a973520b-997d-4c23-a056-590c96123e43-operator-scripts\") pod \"watcher-db-create-vjzm9\" (UID: \"a973520b-997d-4c23-a056-590c96123e43\") " pod="openstack/watcher-db-create-vjzm9" Feb 19 05:41:56 crc kubenswrapper[5012]: I0219 05:41:56.993607 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a973520b-997d-4c23-a056-590c96123e43-operator-scripts\") pod \"watcher-db-create-vjzm9\" (UID: \"a973520b-997d-4c23-a056-590c96123e43\") " pod="openstack/watcher-db-create-vjzm9" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.094177 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e45e098-f689-4015-9871-5f66e5d7bef1-operator-scripts\") pod \"watcher-34a7-account-create-update-84f2g\" (UID: \"6e45e098-f689-4015-9871-5f66e5d7bef1\") " pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.094330 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-ld6zd\" (UniqueName: \"kubernetes.io/projected/6e45e098-f689-4015-9871-5f66e5d7bef1-kube-api-access-ld6zd\") pod \"watcher-34a7-account-create-update-84f2g\" (UID: \"6e45e098-f689-4015-9871-5f66e5d7bef1\") " pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.095609 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e45e098-f689-4015-9871-5f66e5d7bef1-operator-scripts\") pod \"watcher-34a7-account-create-update-84f2g\" (UID: \"6e45e098-f689-4015-9871-5f66e5d7bef1\") " pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.126924 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld6zd\" (UniqueName: \"kubernetes.io/projected/6e45e098-f689-4015-9871-5f66e5d7bef1-kube-api-access-ld6zd\") pod \"watcher-34a7-account-create-update-84f2g\" (UID: \"6e45e098-f689-4015-9871-5f66e5d7bef1\") " pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.134028 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l9wv\" (UniqueName: \"kubernetes.io/projected/a973520b-997d-4c23-a056-590c96123e43-kube-api-access-4l9wv\") pod \"watcher-db-create-vjzm9\" (UID: \"a973520b-997d-4c23-a056-590c96123e43\") " pod="openstack/watcher-db-create-vjzm9" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.357196 5012 generic.go:334] "Generic (PLEG): container finished" podID="6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a" containerID="573a87d5e8e95277642af154eba731e6d506fbe9be8db1436f41349ffe7bcbd4" exitCode=0 Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.357269 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hthfx" 
event={"ID":"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a","Type":"ContainerDied","Data":"573a87d5e8e95277642af154eba731e6d506fbe9be8db1436f41349ffe7bcbd4"} Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.357330 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hthfx" event={"ID":"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a","Type":"ContainerStarted","Data":"32ae99cc9db8c5e5c207480404d461eddd26622f90d7889b9a998d9df04ee55b"} Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.359537 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jktc7" event={"ID":"12f3008a-413a-4fe7-b3c1-773c10b6b2bf","Type":"ContainerStarted","Data":"7eb12edfddf61f27034bd898f26189ecda10cab4ae6f1560fd50a310988165c4"} Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.362259 5012 generic.go:334] "Generic (PLEG): container finished" podID="e1e3020d-901d-4649-9e94-c5c0a4cc523d" containerID="65e190912c6d7142d01553a587f58e32095a3f893daa4d06beb98e431777939c" exitCode=0 Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.362336 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r8ddf" event={"ID":"e1e3020d-901d-4649-9e94-c5c0a4cc523d","Type":"ContainerDied","Data":"65e190912c6d7142d01553a587f58e32095a3f893daa4d06beb98e431777939c"} Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.364577 5012 generic.go:334] "Generic (PLEG): container finished" podID="90a75d3b-186a-41d6-92a8-94729c520aa5" containerID="0ba4832ef5cde65c22a33ecfff620cd13c71e947e2063a45381a8045e3407918" exitCode=0 Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.364633 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-91bd-account-create-update-54r7l" event={"ID":"90a75d3b-186a-41d6-92a8-94729c520aa5","Type":"ContainerDied","Data":"0ba4832ef5cde65c22a33ecfff620cd13c71e947e2063a45381a8045e3407918"} Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.366104 5012 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/placement-22e2-account-create-update-vddht" event={"ID":"d1e7d95a-d78a-4d54-a66b-565114b4823e","Type":"ContainerStarted","Data":"415948d38f4e1e498d619eb6a6b2469946c3c60046c19bbad1963803a9a9ee0e"} Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.367868 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b5f0-account-create-update-l7b8m" event={"ID":"533d4699-332c-4ceb-ad6e-77c680699214","Type":"ContainerStarted","Data":"6000ce41874befebf1b7c7cc7cbf4ce7340ce07971239d672500e2598326f86a"} Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.368855 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-vjzm9" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.369724 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"3c628866-f96d-4e7b-8846-7073c98dd389","Type":"ContainerStarted","Data":"39447df96b54f1be84a97ec4a361863f1bba8e92bceec140937b025ac768a708"} Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.386658 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.447602 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-jktc7" podStartSLOduration=2.447580962 podStartE2EDuration="2.447580962s" podCreationTimestamp="2026-02-19 05:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:41:57.443079138 +0000 UTC m=+1013.476401717" watchObservedRunningTime="2026-02-19 05:41:57.447580962 +0000 UTC m=+1013.480903531" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.473288 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-22e2-account-create-update-vddht" podStartSLOduration=2.473265327 podStartE2EDuration="2.473265327s" podCreationTimestamp="2026-02-19 05:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:41:57.460218879 +0000 UTC m=+1013.493541488" watchObservedRunningTime="2026-02-19 05:41:57.473265327 +0000 UTC m=+1013.506587906" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.860507 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.925078 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f7d487d45-bvz4n"] Feb 19 05:41:57 crc kubenswrapper[5012]: I0219 05:41:57.925328 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" podUID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" containerName="dnsmasq-dns" containerID="cri-o://8f0dc1aa57e08411f9d0f619e65ecab31defd41e57bdd287ce850d95e5dc2423" gracePeriod=10 Feb 19 05:41:58 crc kubenswrapper[5012]: I0219 05:41:58.379873 
5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerStarted","Data":"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0"} Feb 19 05:41:58 crc kubenswrapper[5012]: I0219 05:41:58.381805 5012 generic.go:334] "Generic (PLEG): container finished" podID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" containerID="8f0dc1aa57e08411f9d0f619e65ecab31defd41e57bdd287ce850d95e5dc2423" exitCode=0 Feb 19 05:41:58 crc kubenswrapper[5012]: I0219 05:41:58.381870 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" event={"ID":"79e01828-7818-4fe8-bd3f-8d39e9bf939c","Type":"ContainerDied","Data":"8f0dc1aa57e08411f9d0f619e65ecab31defd41e57bdd287ce850d95e5dc2423"} Feb 19 05:41:58 crc kubenswrapper[5012]: I0219 05:41:58.383266 5012 generic.go:334] "Generic (PLEG): container finished" podID="12f3008a-413a-4fe7-b3c1-773c10b6b2bf" containerID="c98bff27bc9812d723f9217b691c091425289e0f299460c4c4e1c7163b359d43" exitCode=0 Feb 19 05:41:58 crc kubenswrapper[5012]: I0219 05:41:58.383328 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jktc7" event={"ID":"12f3008a-413a-4fe7-b3c1-773c10b6b2bf","Type":"ContainerDied","Data":"c98bff27bc9812d723f9217b691c091425289e0f299460c4c4e1c7163b359d43"} Feb 19 05:41:58 crc kubenswrapper[5012]: I0219 05:41:58.385603 5012 generic.go:334] "Generic (PLEG): container finished" podID="d1e7d95a-d78a-4d54-a66b-565114b4823e" containerID="e1dc1ea6e87e48e7096bcfb12892dc9ac8929ba2984948549033f17095a5c4d5" exitCode=0 Feb 19 05:41:58 crc kubenswrapper[5012]: I0219 05:41:58.385773 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-22e2-account-create-update-vddht" event={"ID":"d1e7d95a-d78a-4d54-a66b-565114b4823e","Type":"ContainerDied","Data":"e1dc1ea6e87e48e7096bcfb12892dc9ac8929ba2984948549033f17095a5c4d5"} Feb 19 05:42:01 crc 
kubenswrapper[5012]: I0219 05:42:01.311964 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-d9g9k"] Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 05:42:01.315977 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 05:42:01.319955 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 05:42:01.323149 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d9g9k"] Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 05:42:01.396331 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff1c217f-b6fa-482c-ad1b-5168cb882283-operator-scripts\") pod \"root-account-create-update-d9g9k\" (UID: \"ff1c217f-b6fa-482c-ad1b-5168cb882283\") " pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 05:42:01.396456 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blfg6\" (UniqueName: \"kubernetes.io/projected/ff1c217f-b6fa-482c-ad1b-5168cb882283-kube-api-access-blfg6\") pod \"root-account-create-update-d9g9k\" (UID: \"ff1c217f-b6fa-482c-ad1b-5168cb882283\") " pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 05:42:01.501211 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blfg6\" (UniqueName: \"kubernetes.io/projected/ff1c217f-b6fa-482c-ad1b-5168cb882283-kube-api-access-blfg6\") pod \"root-account-create-update-d9g9k\" (UID: \"ff1c217f-b6fa-482c-ad1b-5168cb882283\") " pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 
05:42:01.501400 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff1c217f-b6fa-482c-ad1b-5168cb882283-operator-scripts\") pod \"root-account-create-update-d9g9k\" (UID: \"ff1c217f-b6fa-482c-ad1b-5168cb882283\") " pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 05:42:01.502422 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff1c217f-b6fa-482c-ad1b-5168cb882283-operator-scripts\") pod \"root-account-create-update-d9g9k\" (UID: \"ff1c217f-b6fa-482c-ad1b-5168cb882283\") " pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 05:42:01.532685 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blfg6\" (UniqueName: \"kubernetes.io/projected/ff1c217f-b6fa-482c-ad1b-5168cb882283-kube-api-access-blfg6\") pod \"root-account-create-update-d9g9k\" (UID: \"ff1c217f-b6fa-482c-ad1b-5168cb882283\") " pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:01 crc kubenswrapper[5012]: I0219 05:42:01.641224 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.440796 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-91bd-account-create-update-54r7l" event={"ID":"90a75d3b-186a-41d6-92a8-94729c520aa5","Type":"ContainerDied","Data":"2cc8742fd7eb09f99450e71f46c7d9913eee7444573c215e9049c6c4deb3c4af"} Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.441078 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cc8742fd7eb09f99450e71f46c7d9913eee7444573c215e9049c6c4deb3c4af" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.452660 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-22e2-account-create-update-vddht" event={"ID":"d1e7d95a-d78a-4d54-a66b-565114b4823e","Type":"ContainerDied","Data":"415948d38f4e1e498d619eb6a6b2469946c3c60046c19bbad1963803a9a9ee0e"} Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.452687 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="415948d38f4e1e498d619eb6a6b2469946c3c60046c19bbad1963803a9a9ee0e" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.461883 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hthfx" event={"ID":"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a","Type":"ContainerDied","Data":"32ae99cc9db8c5e5c207480404d461eddd26622f90d7889b9a998d9df04ee55b"} Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.461927 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32ae99cc9db8c5e5c207480404d461eddd26622f90d7889b9a998d9df04ee55b" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.507151 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" event={"ID":"79e01828-7818-4fe8-bd3f-8d39e9bf939c","Type":"ContainerDied","Data":"4cb41e822d4dbb1861f13461a8bcb5e410e5b409d268141b4a6e8e97a369da40"} 
Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.507187 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cb41e822d4dbb1861f13461a8bcb5e410e5b409d268141b4a6e8e97a369da40" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.523473 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jktc7" event={"ID":"12f3008a-413a-4fe7-b3c1-773c10b6b2bf","Type":"ContainerDied","Data":"7eb12edfddf61f27034bd898f26189ecda10cab4ae6f1560fd50a310988165c4"} Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.523771 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eb12edfddf61f27034bd898f26189ecda10cab4ae6f1560fd50a310988165c4" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.538512 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hthfx" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.539771 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r8ddf" event={"ID":"e1e3020d-901d-4649-9e94-c5c0a4cc523d","Type":"ContainerDied","Data":"41cb74d66ab3e64634057788877cc78c4b6583899219dd015cbf188304216e08"} Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.539805 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41cb74d66ab3e64634057788877cc78c4b6583899219dd015cbf188304216e08" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.540595 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.641461 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt2lm\" (UniqueName: \"kubernetes.io/projected/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-kube-api-access-nt2lm\") pod \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\" (UID: \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.641783 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-config\") pod \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.641846 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcg6n\" (UniqueName: \"kubernetes.io/projected/79e01828-7818-4fe8-bd3f-8d39e9bf939c-kube-api-access-mcg6n\") pod \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.641871 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-operator-scripts\") pod \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\" (UID: \"6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.641899 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-dns-svc\") pod \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\" (UID: \"79e01828-7818-4fe8-bd3f-8d39e9bf939c\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.642637 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a" (UID: "6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.649414 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e01828-7818-4fe8-bd3f-8d39e9bf939c-kube-api-access-mcg6n" (OuterVolumeSpecName: "kube-api-access-mcg6n") pod "79e01828-7818-4fe8-bd3f-8d39e9bf939c" (UID: "79e01828-7818-4fe8-bd3f-8d39e9bf939c"). InnerVolumeSpecName "kube-api-access-mcg6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.649468 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-kube-api-access-nt2lm" (OuterVolumeSpecName: "kube-api-access-nt2lm") pod "6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a" (UID: "6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a"). InnerVolumeSpecName "kube-api-access-nt2lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.689476 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-config" (OuterVolumeSpecName: "config") pod "79e01828-7818-4fe8-bd3f-8d39e9bf939c" (UID: "79e01828-7818-4fe8-bd3f-8d39e9bf939c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.690090 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "79e01828-7818-4fe8-bd3f-8d39e9bf939c" (UID: "79e01828-7818-4fe8-bd3f-8d39e9bf939c"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.743720 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt2lm\" (UniqueName: \"kubernetes.io/projected/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-kube-api-access-nt2lm\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.743749 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.743788 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcg6n\" (UniqueName: \"kubernetes.io/projected/79e01828-7818-4fe8-bd3f-8d39e9bf939c-kube-api-access-mcg6n\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.743800 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.743811 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79e01828-7818-4fe8-bd3f-8d39e9bf939c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.787376 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-91bd-account-create-update-54r7l" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.813051 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r8ddf" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.828414 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jktc7" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.858389 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.947150 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sttkj\" (UniqueName: \"kubernetes.io/projected/e1e3020d-901d-4649-9e94-c5c0a4cc523d-kube-api-access-sttkj\") pod \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\" (UID: \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.947258 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28csb\" (UniqueName: \"kubernetes.io/projected/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-kube-api-access-28csb\") pod \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\" (UID: \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.947336 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1e3020d-901d-4649-9e94-c5c0a4cc523d-operator-scripts\") pod \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\" (UID: \"e1e3020d-901d-4649-9e94-c5c0a4cc523d\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.947362 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mplk\" (UniqueName: \"kubernetes.io/projected/90a75d3b-186a-41d6-92a8-94729c520aa5-kube-api-access-4mplk\") pod \"90a75d3b-186a-41d6-92a8-94729c520aa5\" (UID: \"90a75d3b-186a-41d6-92a8-94729c520aa5\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.947425 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a75d3b-186a-41d6-92a8-94729c520aa5-operator-scripts\") pod 
\"90a75d3b-186a-41d6-92a8-94729c520aa5\" (UID: \"90a75d3b-186a-41d6-92a8-94729c520aa5\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.947534 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-operator-scripts\") pod \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\" (UID: \"12f3008a-413a-4fe7-b3c1-773c10b6b2bf\") " Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.948640 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e3020d-901d-4649-9e94-c5c0a4cc523d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1e3020d-901d-4649-9e94-c5c0a4cc523d" (UID: "e1e3020d-901d-4649-9e94-c5c0a4cc523d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.948698 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90a75d3b-186a-41d6-92a8-94729c520aa5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90a75d3b-186a-41d6-92a8-94729c520aa5" (UID: "90a75d3b-186a-41d6-92a8-94729c520aa5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.948712 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12f3008a-413a-4fe7-b3c1-773c10b6b2bf" (UID: "12f3008a-413a-4fe7-b3c1-773c10b6b2bf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.952613 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90a75d3b-186a-41d6-92a8-94729c520aa5-kube-api-access-4mplk" (OuterVolumeSpecName: "kube-api-access-4mplk") pod "90a75d3b-186a-41d6-92a8-94729c520aa5" (UID: "90a75d3b-186a-41d6-92a8-94729c520aa5"). InnerVolumeSpecName "kube-api-access-4mplk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.952728 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-kube-api-access-28csb" (OuterVolumeSpecName: "kube-api-access-28csb") pod "12f3008a-413a-4fe7-b3c1-773c10b6b2bf" (UID: "12f3008a-413a-4fe7-b3c1-773c10b6b2bf"). InnerVolumeSpecName "kube-api-access-28csb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:02 crc kubenswrapper[5012]: I0219 05:42:02.953486 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e3020d-901d-4649-9e94-c5c0a4cc523d-kube-api-access-sttkj" (OuterVolumeSpecName: "kube-api-access-sttkj") pod "e1e3020d-901d-4649-9e94-c5c0a4cc523d" (UID: "e1e3020d-901d-4649-9e94-c5c0a4cc523d"). InnerVolumeSpecName "kube-api-access-sttkj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.022174 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-34a7-account-create-update-84f2g"] Feb 19 05:42:03 crc kubenswrapper[5012]: W0219 05:42:03.028070 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff1c217f_b6fa_482c_ad1b_5168cb882283.slice/crio-975b4531e6ccb72f2f56ccef48caa1ef5291f994f247b8d40940642b67930d0b WatchSource:0}: Error finding container 975b4531e6ccb72f2f56ccef48caa1ef5291f994f247b8d40940642b67930d0b: Status 404 returned error can't find the container with id 975b4531e6ccb72f2f56ccef48caa1ef5291f994f247b8d40940642b67930d0b Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.031576 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d9g9k"] Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.048731 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kq4b\" (UniqueName: \"kubernetes.io/projected/d1e7d95a-d78a-4d54-a66b-565114b4823e-kube-api-access-5kq4b\") pod \"d1e7d95a-d78a-4d54-a66b-565114b4823e\" (UID: \"d1e7d95a-d78a-4d54-a66b-565114b4823e\") " Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.048813 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e7d95a-d78a-4d54-a66b-565114b4823e-operator-scripts\") pod \"d1e7d95a-d78a-4d54-a66b-565114b4823e\" (UID: \"d1e7d95a-d78a-4d54-a66b-565114b4823e\") " Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.049454 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1e7d95a-d78a-4d54-a66b-565114b4823e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1e7d95a-d78a-4d54-a66b-565114b4823e" (UID: 
"d1e7d95a-d78a-4d54-a66b-565114b4823e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.049774 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1e3020d-901d-4649-9e94-c5c0a4cc523d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.049793 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mplk\" (UniqueName: \"kubernetes.io/projected/90a75d3b-186a-41d6-92a8-94729c520aa5-kube-api-access-4mplk\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.049819 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1e7d95a-d78a-4d54-a66b-565114b4823e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.049831 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90a75d3b-186a-41d6-92a8-94729c520aa5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.049839 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.049849 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sttkj\" (UniqueName: \"kubernetes.io/projected/e1e3020d-901d-4649-9e94-c5c0a4cc523d-kube-api-access-sttkj\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.049857 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28csb\" (UniqueName: 
\"kubernetes.io/projected/12f3008a-413a-4fe7-b3c1-773c10b6b2bf-kube-api-access-28csb\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.051472 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1e7d95a-d78a-4d54-a66b-565114b4823e-kube-api-access-5kq4b" (OuterVolumeSpecName: "kube-api-access-5kq4b") pod "d1e7d95a-d78a-4d54-a66b-565114b4823e" (UID: "d1e7d95a-d78a-4d54-a66b-565114b4823e"). InnerVolumeSpecName "kube-api-access-5kq4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.116426 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-vjzm9"] Feb 19 05:42:03 crc kubenswrapper[5012]: W0219 05:42:03.124358 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda973520b_997d_4c23_a056_590c96123e43.slice/crio-314360f6e925c772631000a7ca09cdf8d9f366b9b615304a27d678fd6c7a2d70 WatchSource:0}: Error finding container 314360f6e925c772631000a7ca09cdf8d9f366b9b615304a27d678fd6c7a2d70: Status 404 returned error can't find the container with id 314360f6e925c772631000a7ca09cdf8d9f366b9b615304a27d678fd6c7a2d70 Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.152093 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kq4b\" (UniqueName: \"kubernetes.io/projected/d1e7d95a-d78a-4d54-a66b-565114b4823e-kube-api-access-5kq4b\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.547854 5012 generic.go:334] "Generic (PLEG): container finished" podID="ff1c217f-b6fa-482c-ad1b-5168cb882283" containerID="abc0139cac003d44d29c14053f3981b5bda18d4f49ee4f01ff970a93700f4fc7" exitCode=0 Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.547917 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d9g9k" 
event={"ID":"ff1c217f-b6fa-482c-ad1b-5168cb882283","Type":"ContainerDied","Data":"abc0139cac003d44d29c14053f3981b5bda18d4f49ee4f01ff970a93700f4fc7"} Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.547943 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d9g9k" event={"ID":"ff1c217f-b6fa-482c-ad1b-5168cb882283","Type":"ContainerStarted","Data":"975b4531e6ccb72f2f56ccef48caa1ef5291f994f247b8d40940642b67930d0b"} Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.551335 5012 generic.go:334] "Generic (PLEG): container finished" podID="6e45e098-f689-4015-9871-5f66e5d7bef1" containerID="0da1732600a370cfbfe77664995408f2ab300c5ef7fcf22ab0fd4f379bf54473" exitCode=0 Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.551376 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-34a7-account-create-update-84f2g" event={"ID":"6e45e098-f689-4015-9871-5f66e5d7bef1","Type":"ContainerDied","Data":"0da1732600a370cfbfe77664995408f2ab300c5ef7fcf22ab0fd4f379bf54473"} Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.551401 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-34a7-account-create-update-84f2g" event={"ID":"6e45e098-f689-4015-9871-5f66e5d7bef1","Type":"ContainerStarted","Data":"be5862dc6f34b983db201be5afc0571b16d829c3169057121e4b42ea84e0b0c6"} Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.553180 5012 generic.go:334] "Generic (PLEG): container finished" podID="a973520b-997d-4c23-a056-590c96123e43" containerID="6cb45a4049590e4fb7d60e94e092be98bdb1a162fc286f8a8013620e8c330260" exitCode=0 Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.553232 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-vjzm9" event={"ID":"a973520b-997d-4c23-a056-590c96123e43","Type":"ContainerDied","Data":"6cb45a4049590e4fb7d60e94e092be98bdb1a162fc286f8a8013620e8c330260"} Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 
05:42:03.553248 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-vjzm9" event={"ID":"a973520b-997d-4c23-a056-590c96123e43","Type":"ContainerStarted","Data":"314360f6e925c772631000a7ca09cdf8d9f366b9b615304a27d678fd6c7a2d70"} Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.555458 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5vxhd" event={"ID":"d05da3bc-6c22-4956-9fab-331eed79d175","Type":"ContainerStarted","Data":"c2e46f5b1d6014395f2cc0ca721ce5c1df3a8f677de34c3d64f89b616ca2d967"} Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.558545 5012 generic.go:334] "Generic (PLEG): container finished" podID="533d4699-332c-4ceb-ad6e-77c680699214" containerID="8d4101d8165775d3c785f3ad562d7ef71806f55866410c4f9e87581c5430851f" exitCode=0 Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.558635 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r8ddf" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.558645 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.558661 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-22e2-account-create-update-vddht" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.558679 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b5f0-account-create-update-l7b8m" event={"ID":"533d4699-332c-4ceb-ad6e-77c680699214","Type":"ContainerDied","Data":"8d4101d8165775d3c785f3ad562d7ef71806f55866410c4f9e87581c5430851f"} Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.558704 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jktc7" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.558727 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-91bd-account-create-update-54r7l" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.558731 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hthfx" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.599592 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-5vxhd" podStartSLOduration=5.444478055 podStartE2EDuration="12.59957206s" podCreationTimestamp="2026-02-19 05:41:51 +0000 UTC" firstStartedPulling="2026-02-19 05:41:55.294774343 +0000 UTC m=+1011.328096912" lastFinishedPulling="2026-02-19 05:42:02.449868348 +0000 UTC m=+1018.483190917" observedRunningTime="2026-02-19 05:42:03.593137668 +0000 UTC m=+1019.626460237" watchObservedRunningTime="2026-02-19 05:42:03.59957206 +0000 UTC m=+1019.632894629" Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.736033 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f7d487d45-bvz4n"] Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.745282 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f7d487d45-bvz4n"] Feb 19 05:42:03 crc kubenswrapper[5012]: I0219 05:42:03.871796 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:42:03 crc kubenswrapper[5012]: E0219 05:42:03.880246 5012 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 05:42:03 crc kubenswrapper[5012]: E0219 05:42:03.880289 
5012 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 05:42:03 crc kubenswrapper[5012]: E0219 05:42:03.880382 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift podName:c089afc3-1655-4675-b4e1-a62ec6929498 nodeName:}" failed. No retries permitted until 2026-02-19 05:42:19.880354536 +0000 UTC m=+1035.913677105 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift") pod "swift-storage-0" (UID: "c089afc3-1655-4675-b4e1-a62ec6929498") : configmap "swift-ring-files" not found Feb 19 05:42:04 crc kubenswrapper[5012]: I0219 05:42:04.719181 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" path="/var/lib/kubelet/pods/79e01828-7818-4fe8-bd3f-8d39e9bf939c/volumes" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.133588 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7f7d487d45-bvz4n" podUID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.103:5353: i/o timeout" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.224388 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.234065 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-vjzm9" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251380 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-24p82"] Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251698 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:05 crc kubenswrapper[5012]: E0219 05:42:05.251734 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" containerName="init" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251746 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" containerName="init" Feb 19 05:42:05 crc kubenswrapper[5012]: E0219 05:42:05.251756 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251762 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: E0219 05:42:05.251778 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" containerName="dnsmasq-dns" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251785 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" containerName="dnsmasq-dns" Feb 19 05:42:05 crc kubenswrapper[5012]: E0219 05:42:05.251795 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a75d3b-186a-41d6-92a8-94729c520aa5" containerName="mariadb-account-create-update" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251801 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a75d3b-186a-41d6-92a8-94729c520aa5" containerName="mariadb-account-create-update" Feb 19 05:42:05 crc kubenswrapper[5012]: E0219 05:42:05.251814 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533d4699-332c-4ceb-ad6e-77c680699214" containerName="mariadb-account-create-update" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251820 5012 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="533d4699-332c-4ceb-ad6e-77c680699214" containerName="mariadb-account-create-update" Feb 19 05:42:05 crc kubenswrapper[5012]: E0219 05:42:05.251832 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12f3008a-413a-4fe7-b3c1-773c10b6b2bf" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251838 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="12f3008a-413a-4fe7-b3c1-773c10b6b2bf" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: E0219 05:42:05.251845 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1e7d95a-d78a-4d54-a66b-565114b4823e" containerName="mariadb-account-create-update" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251851 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e7d95a-d78a-4d54-a66b-565114b4823e" containerName="mariadb-account-create-update" Feb 19 05:42:05 crc kubenswrapper[5012]: E0219 05:42:05.251862 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a973520b-997d-4c23-a056-590c96123e43" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251870 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a973520b-997d-4c23-a056-590c96123e43" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: E0219 05:42:05.251880 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e3020d-901d-4649-9e94-c5c0a4cc523d" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.251885 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e3020d-901d-4649-9e94-c5c0a4cc523d" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252034 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="533d4699-332c-4ceb-ad6e-77c680699214" containerName="mariadb-account-create-update" Feb 19 
05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252048 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1e7d95a-d78a-4d54-a66b-565114b4823e" containerName="mariadb-account-create-update" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252060 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e01828-7818-4fe8-bd3f-8d39e9bf939c" containerName="dnsmasq-dns" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252069 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252082 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e3020d-901d-4649-9e94-c5c0a4cc523d" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252089 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="12f3008a-413a-4fe7-b3c1-773c10b6b2bf" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252096 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a973520b-997d-4c23-a056-590c96123e43" containerName="mariadb-database-create" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252107 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a75d3b-186a-41d6-92a8-94729c520aa5" containerName="mariadb-account-create-update" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252118 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1c217f-b6fa-482c-ad1b-5168cb882283" containerName="mariadb-account-create-update" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.252664 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.255769 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.255811 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pmvmf" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.256926 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.312809 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld6zd\" (UniqueName: \"kubernetes.io/projected/6e45e098-f689-4015-9871-5f66e5d7bef1-kube-api-access-ld6zd\") pod \"6e45e098-f689-4015-9871-5f66e5d7bef1\" (UID: \"6e45e098-f689-4015-9871-5f66e5d7bef1\") " Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.312951 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/533d4699-332c-4ceb-ad6e-77c680699214-operator-scripts\") pod \"533d4699-332c-4ceb-ad6e-77c680699214\" (UID: \"533d4699-332c-4ceb-ad6e-77c680699214\") " Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313043 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l9wv\" (UniqueName: \"kubernetes.io/projected/a973520b-997d-4c23-a056-590c96123e43-kube-api-access-4l9wv\") pod \"a973520b-997d-4c23-a056-590c96123e43\" (UID: \"a973520b-997d-4c23-a056-590c96123e43\") " Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313086 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqkd8\" (UniqueName: \"kubernetes.io/projected/533d4699-332c-4ceb-ad6e-77c680699214-kube-api-access-fqkd8\") pod 
\"533d4699-332c-4ceb-ad6e-77c680699214\" (UID: \"533d4699-332c-4ceb-ad6e-77c680699214\") " Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313102 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a973520b-997d-4c23-a056-590c96123e43-operator-scripts\") pod \"a973520b-997d-4c23-a056-590c96123e43\" (UID: \"a973520b-997d-4c23-a056-590c96123e43\") " Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313147 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e45e098-f689-4015-9871-5f66e5d7bef1-operator-scripts\") pod \"6e45e098-f689-4015-9871-5f66e5d7bef1\" (UID: \"6e45e098-f689-4015-9871-5f66e5d7bef1\") " Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313162 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blfg6\" (UniqueName: \"kubernetes.io/projected/ff1c217f-b6fa-482c-ad1b-5168cb882283-kube-api-access-blfg6\") pod \"ff1c217f-b6fa-482c-ad1b-5168cb882283\" (UID: \"ff1c217f-b6fa-482c-ad1b-5168cb882283\") " Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313206 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff1c217f-b6fa-482c-ad1b-5168cb882283-operator-scripts\") pod \"ff1c217f-b6fa-482c-ad1b-5168cb882283\" (UID: \"ff1c217f-b6fa-482c-ad1b-5168cb882283\") " Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313399 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn8l8\" (UniqueName: \"kubernetes.io/projected/31d56d90-ce06-4de3-9edb-2092780e9afe-kube-api-access-kn8l8\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 
05:42:05.313460 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-config-data\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313485 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-db-sync-config-data\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313513 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-combined-ca-bundle\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313544 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/533d4699-332c-4ceb-ad6e-77c680699214-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "533d4699-332c-4ceb-ad6e-77c680699214" (UID: "533d4699-332c-4ceb-ad6e-77c680699214"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.313577 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a973520b-997d-4c23-a056-590c96123e43-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a973520b-997d-4c23-a056-590c96123e43" (UID: "a973520b-997d-4c23-a056-590c96123e43"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.314208 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e45e098-f689-4015-9871-5f66e5d7bef1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e45e098-f689-4015-9871-5f66e5d7bef1" (UID: "6e45e098-f689-4015-9871-5f66e5d7bef1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.314653 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff1c217f-b6fa-482c-ad1b-5168cb882283-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff1c217f-b6fa-482c-ad1b-5168cb882283" (UID: "ff1c217f-b6fa-482c-ad1b-5168cb882283"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.315639 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/533d4699-332c-4ceb-ad6e-77c680699214-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.315656 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a973520b-997d-4c23-a056-590c96123e43-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.315666 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e45e098-f689-4015-9871-5f66e5d7bef1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.315676 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ff1c217f-b6fa-482c-ad1b-5168cb882283-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.321062 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/533d4699-332c-4ceb-ad6e-77c680699214-kube-api-access-fqkd8" (OuterVolumeSpecName: "kube-api-access-fqkd8") pod "533d4699-332c-4ceb-ad6e-77c680699214" (UID: "533d4699-332c-4ceb-ad6e-77c680699214"). InnerVolumeSpecName "kube-api-access-fqkd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.322566 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-24p82"] Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.323387 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff1c217f-b6fa-482c-ad1b-5168cb882283-kube-api-access-blfg6" (OuterVolumeSpecName: "kube-api-access-blfg6") pod "ff1c217f-b6fa-482c-ad1b-5168cb882283" (UID: "ff1c217f-b6fa-482c-ad1b-5168cb882283"). InnerVolumeSpecName "kube-api-access-blfg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.324603 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a973520b-997d-4c23-a056-590c96123e43-kube-api-access-4l9wv" (OuterVolumeSpecName: "kube-api-access-4l9wv") pod "a973520b-997d-4c23-a056-590c96123e43" (UID: "a973520b-997d-4c23-a056-590c96123e43"). InnerVolumeSpecName "kube-api-access-4l9wv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.333976 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e45e098-f689-4015-9871-5f66e5d7bef1-kube-api-access-ld6zd" (OuterVolumeSpecName: "kube-api-access-ld6zd") pod "6e45e098-f689-4015-9871-5f66e5d7bef1" (UID: "6e45e098-f689-4015-9871-5f66e5d7bef1"). InnerVolumeSpecName "kube-api-access-ld6zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.417366 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn8l8\" (UniqueName: \"kubernetes.io/projected/31d56d90-ce06-4de3-9edb-2092780e9afe-kube-api-access-kn8l8\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.417439 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-config-data\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.417466 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-db-sync-config-data\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.417497 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-combined-ca-bundle\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " 
pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.417576 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l9wv\" (UniqueName: \"kubernetes.io/projected/a973520b-997d-4c23-a056-590c96123e43-kube-api-access-4l9wv\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.417587 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqkd8\" (UniqueName: \"kubernetes.io/projected/533d4699-332c-4ceb-ad6e-77c680699214-kube-api-access-fqkd8\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.417596 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blfg6\" (UniqueName: \"kubernetes.io/projected/ff1c217f-b6fa-482c-ad1b-5168cb882283-kube-api-access-blfg6\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.417606 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld6zd\" (UniqueName: \"kubernetes.io/projected/6e45e098-f689-4015-9871-5f66e5d7bef1-kube-api-access-ld6zd\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.432808 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn8l8\" (UniqueName: \"kubernetes.io/projected/31d56d90-ce06-4de3-9edb-2092780e9afe-kube-api-access-kn8l8\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.432863 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-combined-ca-bundle\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.433001 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-config-data\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.440440 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-db-sync-config-data\") pod \"glance-db-sync-24p82\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.579838 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-24p82" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.580527 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b5f0-account-create-update-l7b8m" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.580521 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b5f0-account-create-update-l7b8m" event={"ID":"533d4699-332c-4ceb-ad6e-77c680699214","Type":"ContainerDied","Data":"6000ce41874befebf1b7c7cc7cbf4ce7340ce07971239d672500e2598326f86a"} Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.580681 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6000ce41874befebf1b7c7cc7cbf4ce7340ce07971239d672500e2598326f86a" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.582224 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d9g9k" event={"ID":"ff1c217f-b6fa-482c-ad1b-5168cb882283","Type":"ContainerDied","Data":"975b4531e6ccb72f2f56ccef48caa1ef5291f994f247b8d40940642b67930d0b"} Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.582252 5012 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d9g9k" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.582254 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="975b4531e6ccb72f2f56ccef48caa1ef5291f994f247b8d40940642b67930d0b" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.585701 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-34a7-account-create-update-84f2g" event={"ID":"6e45e098-f689-4015-9871-5f66e5d7bef1","Type":"ContainerDied","Data":"be5862dc6f34b983db201be5afc0571b16d829c3169057121e4b42ea84e0b0c6"} Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.585724 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be5862dc6f34b983db201be5afc0571b16d829c3169057121e4b42ea84e0b0c6" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.585712 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-34a7-account-create-update-84f2g" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.587139 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-vjzm9" event={"ID":"a973520b-997d-4c23-a056-590c96123e43","Type":"ContainerDied","Data":"314360f6e925c772631000a7ca09cdf8d9f366b9b615304a27d678fd6c7a2d70"} Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.587162 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="314360f6e925c772631000a7ca09cdf8d9f366b9b615304a27d678fd6c7a2d70" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.587168 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-vjzm9" Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.589971 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerStarted","Data":"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f"} Feb 19 05:42:05 crc kubenswrapper[5012]: I0219 05:42:05.616503 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.027395032 podStartE2EDuration="59.616486123s" podCreationTimestamp="2026-02-19 05:41:06 +0000 UTC" firstStartedPulling="2026-02-19 05:41:08.667585171 +0000 UTC m=+964.700907740" lastFinishedPulling="2026-02-19 05:42:05.256676262 +0000 UTC m=+1021.289998831" observedRunningTime="2026-02-19 05:42:05.61317292 +0000 UTC m=+1021.646495489" watchObservedRunningTime="2026-02-19 05:42:05.616486123 +0000 UTC m=+1021.649808692" Feb 19 05:42:06 crc kubenswrapper[5012]: I0219 05:42:06.025950 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-24p82"] Feb 19 05:42:06 crc kubenswrapper[5012]: W0219 05:42:06.026683 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31d56d90_ce06_4de3_9edb_2092780e9afe.slice/crio-aa61f86cc8d1c9a72406e1f686123f265fff57ea31daf565ee1da1a7dabb6d3f WatchSource:0}: Error finding container aa61f86cc8d1c9a72406e1f686123f265fff57ea31daf565ee1da1a7dabb6d3f: Status 404 returned error can't find the container with id aa61f86cc8d1c9a72406e1f686123f265fff57ea31daf565ee1da1a7dabb6d3f Feb 19 05:42:06 crc kubenswrapper[5012]: I0219 05:42:06.600613 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-24p82" 
event={"ID":"31d56d90-ce06-4de3-9edb-2092780e9afe","Type":"ContainerStarted","Data":"aa61f86cc8d1c9a72406e1f686123f265fff57ea31daf565ee1da1a7dabb6d3f"} Feb 19 05:42:07 crc kubenswrapper[5012]: I0219 05:42:07.755432 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-d9g9k"] Feb 19 05:42:07 crc kubenswrapper[5012]: I0219 05:42:07.761999 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-d9g9k"] Feb 19 05:42:07 crc kubenswrapper[5012]: I0219 05:42:07.977862 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:07 crc kubenswrapper[5012]: I0219 05:42:07.979003 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:07 crc kubenswrapper[5012]: I0219 05:42:07.982079 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:08 crc kubenswrapper[5012]: I0219 05:42:08.053639 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 19 05:42:08 crc kubenswrapper[5012]: I0219 05:42:08.617273 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:08 crc kubenswrapper[5012]: I0219 05:42:08.724462 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff1c217f-b6fa-482c-ad1b-5168cb882283" path="/var/lib/kubelet/pods/ff1c217f-b6fa-482c-ad1b-5168cb882283/volumes" Feb 19 05:42:11 crc kubenswrapper[5012]: I0219 05:42:11.652050 5012 generic.go:334] "Generic (PLEG): container finished" podID="b0095712-262e-4562-afac-0f2f4372224d" containerID="1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f" exitCode=0 Feb 19 05:42:11 crc kubenswrapper[5012]: I0219 05:42:11.652126 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"b0095712-262e-4562-afac-0f2f4372224d","Type":"ContainerDied","Data":"1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f"} Feb 19 05:42:11 crc kubenswrapper[5012]: I0219 05:42:11.654569 5012 generic.go:334] "Generic (PLEG): container finished" podID="d05da3bc-6c22-4956-9fab-331eed79d175" containerID="c2e46f5b1d6014395f2cc0ca721ce5c1df3a8f677de34c3d64f89b616ca2d967" exitCode=0 Feb 19 05:42:11 crc kubenswrapper[5012]: I0219 05:42:11.654597 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5vxhd" event={"ID":"d05da3bc-6c22-4956-9fab-331eed79d175","Type":"ContainerDied","Data":"c2e46f5b1d6014395f2cc0ca721ce5c1df3a8f677de34c3d64f89b616ca2d967"} Feb 19 05:42:11 crc kubenswrapper[5012]: I0219 05:42:11.861884 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 05:42:11 crc kubenswrapper[5012]: I0219 05:42:11.862366 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="prometheus" containerID="cri-o://a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c" gracePeriod=600 Feb 19 05:42:11 crc kubenswrapper[5012]: I0219 05:42:11.862572 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="thanos-sidecar" containerID="cri-o://e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f" gracePeriod=600 Feb 19 05:42:11 crc kubenswrapper[5012]: I0219 05:42:11.862678 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="config-reloader" containerID="cri-o://7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0" gracePeriod=600 Feb 
19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.330224 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.478293 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-web-config\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.478829 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e31edbd-c20b-420d-8888-cafb392410cd-config-out\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.478880 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-2\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.479059 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.479113 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-config\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 
05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.479139 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-1\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.479157 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-thanos-prometheus-http-client-file\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.479195 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s694\" (UniqueName: \"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-kube-api-access-7s694\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.479218 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-0\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.479237 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-tls-assets\") pod \"1e31edbd-c20b-420d-8888-cafb392410cd\" (UID: \"1e31edbd-c20b-420d-8888-cafb392410cd\") " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.479542 5012 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.481048 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.481130 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.483499 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e31edbd-c20b-420d-8888-cafb392410cd-config-out" (OuterVolumeSpecName: "config-out") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.483648 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.486560 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-kube-api-access-7s694" (OuterVolumeSpecName: "kube-api-access-7s694") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "kube-api-access-7s694". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.489737 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-config" (OuterVolumeSpecName: "config") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.490029 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.503230 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "pvc-7fbf442c-c467-48a5-9a2f-86a74d778584". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.516559 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-web-config" (OuterVolumeSpecName: "web-config") pod "1e31edbd-c20b-420d-8888-cafb392410cd" (UID: "1e31edbd-c20b-420d-8888-cafb392410cd"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585040 5012 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1e31edbd-c20b-420d-8888-cafb392410cd-config-out\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585081 5012 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585120 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") on node \"crc\" " Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585132 5012 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585142 5012 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585152 5012 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585162 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s694\" (UniqueName: \"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-kube-api-access-7s694\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585171 5012 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1e31edbd-c20b-420d-8888-cafb392410cd-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585179 5012 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1e31edbd-c20b-420d-8888-cafb392410cd-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.585188 5012 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1e31edbd-c20b-420d-8888-cafb392410cd-web-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.605284 5012 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME 
capability not set. Skipping UnmountDevice... Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.605473 5012 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7fbf442c-c467-48a5-9a2f-86a74d778584" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584") on node "crc" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.665282 5012 generic.go:334] "Generic (PLEG): container finished" podID="1e31edbd-c20b-420d-8888-cafb392410cd" containerID="e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f" exitCode=0 Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.665336 5012 generic.go:334] "Generic (PLEG): container finished" podID="1e31edbd-c20b-420d-8888-cafb392410cd" containerID="7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0" exitCode=0 Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.665343 5012 generic.go:334] "Generic (PLEG): container finished" podID="1e31edbd-c20b-420d-8888-cafb392410cd" containerID="a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c" exitCode=0 Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.665344 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerDied","Data":"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f"} Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.665395 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerDied","Data":"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0"} Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.665410 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerDied","Data":"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c"} Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.665419 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1e31edbd-c20b-420d-8888-cafb392410cd","Type":"ContainerDied","Data":"91314d71567782400d0673184328bab50c18185869b638d4949c49d81c11f6bb"} Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.665435 5012 scope.go:117] "RemoveContainer" containerID="e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.665501 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.674193 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0095712-262e-4562-afac-0f2f4372224d","Type":"ContainerStarted","Data":"dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32"} Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.675387 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.686953 5012 reconciler_common.go:293] "Volume detached for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.695855 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.744014692 podStartE2EDuration="1m13.695839525s" podCreationTimestamp="2026-02-19 05:40:59 +0000 UTC" firstStartedPulling="2026-02-19 05:41:01.452812258 +0000 UTC m=+957.486134827" 
lastFinishedPulling="2026-02-19 05:41:37.404637091 +0000 UTC m=+993.437959660" observedRunningTime="2026-02-19 05:42:12.693382333 +0000 UTC m=+1028.726704902" watchObservedRunningTime="2026-02-19 05:42:12.695839525 +0000 UTC m=+1028.729162094" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.747699 5012 scope.go:117] "RemoveContainer" containerID="7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.764686 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.778204 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.780455 5012 scope.go:117] "RemoveContainer" containerID="a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.804707 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.805664 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="thanos-sidecar" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.805685 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="thanos-sidecar" Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.805710 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1c217f-b6fa-482c-ad1b-5168cb882283" containerName="mariadb-account-create-update" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.805717 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1c217f-b6fa-482c-ad1b-5168cb882283" containerName="mariadb-account-create-update" Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.805728 5012 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6e45e098-f689-4015-9871-5f66e5d7bef1" containerName="mariadb-account-create-update" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.805734 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e45e098-f689-4015-9871-5f66e5d7bef1" containerName="mariadb-account-create-update" Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.805744 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="init-config-reloader" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.805751 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="init-config-reloader" Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.805762 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="prometheus" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.805769 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="prometheus" Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.805783 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="config-reloader" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.805789 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="config-reloader" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.805932 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="config-reloader" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.806063 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="thanos-sidecar" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.806078 
5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e45e098-f689-4015-9871-5f66e5d7bef1" containerName="mariadb-account-create-update" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.806086 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" containerName="prometheus" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.807580 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.810661 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.810838 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.810957 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.811154 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.811257 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.811397 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.811808 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7bqtw" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.812339 5012 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.818249 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.819581 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.832141 5012 scope.go:117] "RemoveContainer" containerID="2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.855778 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lj2kq"] Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.864404 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lj2kq" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.869181 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.888345 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lj2kq"] Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895125 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq95l\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-kube-api-access-tq95l\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895177 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895199 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895229 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895251 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895286 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895317 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-config\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895357 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895401 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895426 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8509cc68-c35e-47ea-a634-896143d747ed-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895447 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895466 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.895489 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.897035 5012 scope.go:117] "RemoveContainer" containerID="e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f" Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.897346 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f\": container with ID starting with e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f not found: ID does not exist" containerID="e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.897368 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f"} err="failed to get container status 
\"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f\": rpc error: code = NotFound desc = could not find container \"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f\": container with ID starting with e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f not found: ID does not exist" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.897387 5012 scope.go:117] "RemoveContainer" containerID="7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0" Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.913567 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0\": container with ID starting with 7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0 not found: ID does not exist" containerID="7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.913610 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0"} err="failed to get container status \"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0\": rpc error: code = NotFound desc = could not find container \"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0\": container with ID starting with 7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0 not found: ID does not exist" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.913635 5012 scope.go:117] "RemoveContainer" containerID="a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c" Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.914514 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c\": container with ID starting with a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c not found: ID does not exist" containerID="a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.914554 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c"} err="failed to get container status \"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c\": rpc error: code = NotFound desc = could not find container \"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c\": container with ID starting with a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c not found: ID does not exist" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.914580 5012 scope.go:117] "RemoveContainer" containerID="2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6" Feb 19 05:42:12 crc kubenswrapper[5012]: E0219 05:42:12.914852 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6\": container with ID starting with 2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6 not found: ID does not exist" containerID="2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.914878 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6"} err="failed to get container status \"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6\": rpc error: code = NotFound desc = could not find container \"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6\": container with ID 
starting with 2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6 not found: ID does not exist" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.914893 5012 scope.go:117] "RemoveContainer" containerID="e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.915068 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f"} err="failed to get container status \"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f\": rpc error: code = NotFound desc = could not find container \"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f\": container with ID starting with e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f not found: ID does not exist" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.915088 5012 scope.go:117] "RemoveContainer" containerID="7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.915605 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0"} err="failed to get container status \"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0\": rpc error: code = NotFound desc = could not find container \"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0\": container with ID starting with 7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0 not found: ID does not exist" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.915624 5012 scope.go:117] "RemoveContainer" containerID="a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c" Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.915792 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c"} err="failed to get container status \"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c\": rpc error: code = NotFound desc = could not find container \"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c\": container with ID starting with a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c not found: ID does not exist"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.915812 5012 scope.go:117] "RemoveContainer" containerID="2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.915988 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6"} err="failed to get container status \"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6\": rpc error: code = NotFound desc = could not find container \"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6\": container with ID starting with 2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6 not found: ID does not exist"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.916008 5012 scope.go:117] "RemoveContainer" containerID="e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.916183 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f"} err="failed to get container status \"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f\": rpc error: code = NotFound desc = could not find container \"e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f\": container with ID starting with e619aaffc0dd6892a0799026e08cbf32f3aedf4ff9fc27c768182dad5106059f not found: ID does not exist"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.916203 5012 scope.go:117] "RemoveContainer" containerID="7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.916408 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0"} err="failed to get container status \"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0\": rpc error: code = NotFound desc = could not find container \"7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0\": container with ID starting with 7af463b6caa3b3ab32e05064897c8ae5d41447f3e0383abaf8871298686229b0 not found: ID does not exist"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.916428 5012 scope.go:117] "RemoveContainer" containerID="a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.916587 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c"} err="failed to get container status \"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c\": rpc error: code = NotFound desc = could not find container \"a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c\": container with ID starting with a329cf16dbb657acad2ad902754356d2b0f9348a86febd26dcd09594f8dc667c not found: ID does not exist"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.916605 5012 scope.go:117] "RemoveContainer" containerID="2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.916776 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6"} err="failed to get container status \"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6\": rpc error: code = NotFound desc = could not find container \"2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6\": container with ID starting with 2fcaf10efcdf8baf46ff6a82a6c3dbd17358400dee7def2ff4c1e047ad89f1a6 not found: ID does not exist"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.997776 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.997851 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq95l\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-kube-api-access-tq95l\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.997884 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.997923 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.997953 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.997973 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6zx4\" (UniqueName: \"kubernetes.io/projected/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-kube-api-access-c6zx4\") pod \"root-account-create-update-lj2kq\" (UID: \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\") " pod="openstack/root-account-create-update-lj2kq"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.998014 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.998085 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.998113 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-config\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.998134 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-operator-scripts\") pod \"root-account-create-update-lj2kq\" (UID: \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\") " pod="openstack/root-account-create-update-lj2kq"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.998175 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.998234 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.998256 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8509cc68-c35e-47ea-a634-896143d747ed-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.998283 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.998319 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.999004 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.999062 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:12 crc kubenswrapper[5012]: I0219 05:42:12.999467 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.005145 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.005199 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.005737 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.006438 5012 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.006465 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/80266977aa18e8991458f1f7d5520b709fb21586520e915bbacb4bc2380e455f/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.008724 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.009153 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.010199 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8509cc68-c35e-47ea-a634-896143d747ed-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.010491 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-config\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.010955 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.014005 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq95l\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-kube-api-access-tq95l\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.033776 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") " pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.055483 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.099933 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-swiftconf\") pod \"d05da3bc-6c22-4956-9fab-331eed79d175\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") "
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.099971 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-scripts\") pod \"d05da3bc-6c22-4956-9fab-331eed79d175\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") "
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.100012 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-dispersionconf\") pod \"d05da3bc-6c22-4956-9fab-331eed79d175\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") "
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.100090 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-ring-data-devices\") pod \"d05da3bc-6c22-4956-9fab-331eed79d175\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") "
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.100127 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-combined-ca-bundle\") pod \"d05da3bc-6c22-4956-9fab-331eed79d175\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") "
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.100266 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szrf5\" (UniqueName: \"kubernetes.io/projected/d05da3bc-6c22-4956-9fab-331eed79d175-kube-api-access-szrf5\") pod \"d05da3bc-6c22-4956-9fab-331eed79d175\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") "
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.100313 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d05da3bc-6c22-4956-9fab-331eed79d175-etc-swift\") pod \"d05da3bc-6c22-4956-9fab-331eed79d175\" (UID: \"d05da3bc-6c22-4956-9fab-331eed79d175\") "
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.100637 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6zx4\" (UniqueName: \"kubernetes.io/projected/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-kube-api-access-c6zx4\") pod \"root-account-create-update-lj2kq\" (UID: \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\") " pod="openstack/root-account-create-update-lj2kq"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.100695 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-operator-scripts\") pod \"root-account-create-update-lj2kq\" (UID: \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\") " pod="openstack/root-account-create-update-lj2kq"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.101657 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-operator-scripts\") pod \"root-account-create-update-lj2kq\" (UID: \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\") " pod="openstack/root-account-create-update-lj2kq"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.102038 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d05da3bc-6c22-4956-9fab-331eed79d175" (UID: "d05da3bc-6c22-4956-9fab-331eed79d175"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.106487 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d05da3bc-6c22-4956-9fab-331eed79d175-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d05da3bc-6c22-4956-9fab-331eed79d175" (UID: "d05da3bc-6c22-4956-9fab-331eed79d175"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.114857 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d05da3bc-6c22-4956-9fab-331eed79d175-kube-api-access-szrf5" (OuterVolumeSpecName: "kube-api-access-szrf5") pod "d05da3bc-6c22-4956-9fab-331eed79d175" (UID: "d05da3bc-6c22-4956-9fab-331eed79d175"). InnerVolumeSpecName "kube-api-access-szrf5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.120618 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6zx4\" (UniqueName: \"kubernetes.io/projected/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-kube-api-access-c6zx4\") pod \"root-account-create-update-lj2kq\" (UID: \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\") " pod="openstack/root-account-create-update-lj2kq"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.120881 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d05da3bc-6c22-4956-9fab-331eed79d175" (UID: "d05da3bc-6c22-4956-9fab-331eed79d175"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.137484 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d05da3bc-6c22-4956-9fab-331eed79d175" (UID: "d05da3bc-6c22-4956-9fab-331eed79d175"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.141124 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.148672 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d05da3bc-6c22-4956-9fab-331eed79d175" (UID: "d05da3bc-6c22-4956-9fab-331eed79d175"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.149348 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-scripts" (OuterVolumeSpecName: "scripts") pod "d05da3bc-6c22-4956-9fab-331eed79d175" (UID: "d05da3bc-6c22-4956-9fab-331eed79d175"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.203084 5012 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-dispersionconf\") on node \"crc\" DevicePath \"\""
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.203112 5012 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-ring-data-devices\") on node \"crc\" DevicePath \"\""
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.203121 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.203129 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szrf5\" (UniqueName: \"kubernetes.io/projected/d05da3bc-6c22-4956-9fab-331eed79d175-kube-api-access-szrf5\") on node \"crc\" DevicePath \"\""
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.203139 5012 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d05da3bc-6c22-4956-9fab-331eed79d175-etc-swift\") on node \"crc\" DevicePath \"\""
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.203146 5012 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d05da3bc-6c22-4956-9fab-331eed79d175-swiftconf\") on node \"crc\" DevicePath \"\""
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.203154 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d05da3bc-6c22-4956-9fab-331eed79d175-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.226183 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lj2kq"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.613029 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 19 05:42:13 crc kubenswrapper[5012]: W0219 05:42:13.623717 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8509cc68_c35e_47ea_a634_896143d747ed.slice/crio-c258d2d68f577aa99acf781abe70e8c1f0bea84a31b7c56b2eca30c2af015cb5 WatchSource:0}: Error finding container c258d2d68f577aa99acf781abe70e8c1f0bea84a31b7c56b2eca30c2af015cb5: Status 404 returned error can't find the container with id c258d2d68f577aa99acf781abe70e8c1f0bea84a31b7c56b2eca30c2af015cb5
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.694802 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5vxhd"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.699335 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5vxhd" event={"ID":"d05da3bc-6c22-4956-9fab-331eed79d175","Type":"ContainerDied","Data":"474f2d807b97d130f773ae47927296219b201d325e7ae32ec13971a56bf04456"}
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.699383 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="474f2d807b97d130f773ae47927296219b201d325e7ae32ec13971a56bf04456"
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.717021 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerStarted","Data":"c258d2d68f577aa99acf781abe70e8c1f0bea84a31b7c56b2eca30c2af015cb5"}
Feb 19 05:42:13 crc kubenswrapper[5012]: I0219 05:42:13.729643 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lj2kq"]
Feb 19 05:42:14 crc kubenswrapper[5012]: I0219 05:42:14.716359 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e31edbd-c20b-420d-8888-cafb392410cd" path="/var/lib/kubelet/pods/1e31edbd-c20b-420d-8888-cafb392410cd/volumes"
Feb 19 05:42:14 crc kubenswrapper[5012]: I0219 05:42:14.730218 5012 generic.go:334] "Generic (PLEG): container finished" podID="3c559b49-5b5e-435d-9a6a-66dd1d3cbc79" containerID="93e7f5c5600e832347781d221af700104ca8f39c9c057fb3a233ce4702cf409c" exitCode=0
Feb 19 05:42:14 crc kubenswrapper[5012]: I0219 05:42:14.730267 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lj2kq" event={"ID":"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79","Type":"ContainerDied","Data":"93e7f5c5600e832347781d221af700104ca8f39c9c057fb3a233ce4702cf409c"}
Feb 19 05:42:14 crc kubenswrapper[5012]: I0219 05:42:14.730350 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lj2kq" event={"ID":"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79","Type":"ContainerStarted","Data":"768c114b539b2feebb6baf342756cce337f48a86e7b046dcbc36cac8568a33b9"}
Feb 19 05:42:14 crc kubenswrapper[5012]: I0219 05:42:14.999041 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cr94m" podUID="e2c9ac17-43ef-4ccb-83b1-e20ee03289de" containerName="ovn-controller" probeResult="failure" output=<
Feb 19 05:42:14 crc kubenswrapper[5012]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 19 05:42:14 crc kubenswrapper[5012]: >
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.012871 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7qdpg"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.018917 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7qdpg"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.354533 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cr94m-config-wnbbj"]
Feb 19 05:42:15 crc kubenswrapper[5012]: E0219 05:42:15.354963 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d05da3bc-6c22-4956-9fab-331eed79d175" containerName="swift-ring-rebalance"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.354980 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d05da3bc-6c22-4956-9fab-331eed79d175" containerName="swift-ring-rebalance"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.355165 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d05da3bc-6c22-4956-9fab-331eed79d175" containerName="swift-ring-rebalance"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.355806 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cr94m-config-wnbbj"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.359845 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.361517 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cr94m-config-wnbbj"]
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.450901 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run-ovn\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.450974 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-scripts\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.451019 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-additional-scripts\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.451059 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9lnh\" (UniqueName: \"kubernetes.io/projected/ddbea515-c638-4619-8940-b23d173ceb8b-kube-api-access-s9lnh\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.451098 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-log-ovn\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.451162 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.552685 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run-ovn\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.552765 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-scripts\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.552795 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-additional-scripts\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.552823 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9lnh\" (UniqueName: \"kubernetes.io/projected/ddbea515-c638-4619-8940-b23d173ceb8b-kube-api-access-s9lnh\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.552855 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-log-ovn\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.552917 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.553048 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run-ovn\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.553078 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.553119 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-log-ovn\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.553789 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-additional-scripts\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj"
Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.556220 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-scripts\")
pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.580852 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9lnh\" (UniqueName: \"kubernetes.io/projected/ddbea515-c638-4619-8940-b23d173ceb8b-kube-api-access-s9lnh\") pod \"ovn-controller-cr94m-config-wnbbj\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:15 crc kubenswrapper[5012]: I0219 05:42:15.741109 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.035797 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lj2kq" Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.063652 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6zx4\" (UniqueName: \"kubernetes.io/projected/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-kube-api-access-c6zx4\") pod \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\" (UID: \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\") " Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.063773 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-operator-scripts\") pod \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\" (UID: \"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79\") " Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.064809 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c559b49-5b5e-435d-9a6a-66dd1d3cbc79" (UID: 
"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.073597 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-kube-api-access-c6zx4" (OuterVolumeSpecName: "kube-api-access-c6zx4") pod "3c559b49-5b5e-435d-9a6a-66dd1d3cbc79" (UID: "3c559b49-5b5e-435d-9a6a-66dd1d3cbc79"). InnerVolumeSpecName "kube-api-access-c6zx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.166378 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6zx4\" (UniqueName: \"kubernetes.io/projected/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-kube-api-access-c6zx4\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.166659 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.435519 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cr94m-config-wnbbj"] Feb 19 05:42:16 crc kubenswrapper[5012]: W0219 05:42:16.453339 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddbea515_c638_4619_8940_b23d173ceb8b.slice/crio-5465c2ffb472fdf8f8ad11824f33375fa18d874c433433fe9fd1a48632f10d90 WatchSource:0}: Error finding container 5465c2ffb472fdf8f8ad11824f33375fa18d874c433433fe9fd1a48632f10d90: Status 404 returned error can't find the container with id 5465c2ffb472fdf8f8ad11824f33375fa18d874c433433fe9fd1a48632f10d90 Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.744976 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerStarted","Data":"4ee433ab916c49fcf886f80ee6ab1bd1a03ffacf8d9e4d295c0b15de25056e64"} Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.750794 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lj2kq" Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.750915 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lj2kq" event={"ID":"3c559b49-5b5e-435d-9a6a-66dd1d3cbc79","Type":"ContainerDied","Data":"768c114b539b2feebb6baf342756cce337f48a86e7b046dcbc36cac8568a33b9"} Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.751452 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="768c114b539b2feebb6baf342756cce337f48a86e7b046dcbc36cac8568a33b9" Feb 19 05:42:16 crc kubenswrapper[5012]: I0219 05:42:16.753681 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cr94m-config-wnbbj" event={"ID":"ddbea515-c638-4619-8940-b23d173ceb8b","Type":"ContainerStarted","Data":"5465c2ffb472fdf8f8ad11824f33375fa18d874c433433fe9fd1a48632f10d90"} Feb 19 05:42:17 crc kubenswrapper[5012]: I0219 05:42:17.765330 5012 generic.go:334] "Generic (PLEG): container finished" podID="ddbea515-c638-4619-8940-b23d173ceb8b" containerID="d8e57b0f2b52b5aa983f227ca12d7b7d13d90cca4cada2357120cb84084b1554" exitCode=0 Feb 19 05:42:17 crc kubenswrapper[5012]: I0219 05:42:17.765570 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cr94m-config-wnbbj" event={"ID":"ddbea515-c638-4619-8940-b23d173ceb8b","Type":"ContainerDied","Data":"d8e57b0f2b52b5aa983f227ca12d7b7d13d90cca4cada2357120cb84084b1554"} Feb 19 05:42:19 crc kubenswrapper[5012]: I0219 05:42:19.929636 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:42:19 crc kubenswrapper[5012]: I0219 05:42:19.940257 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c089afc3-1655-4675-b4e1-a62ec6929498-etc-swift\") pod \"swift-storage-0\" (UID: \"c089afc3-1655-4675-b4e1-a62ec6929498\") " pod="openstack/swift-storage-0" Feb 19 05:42:20 crc kubenswrapper[5012]: I0219 05:42:20.016541 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-cr94m" Feb 19 05:42:20 crc kubenswrapper[5012]: I0219 05:42:20.173631 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 19 05:42:22 crc kubenswrapper[5012]: I0219 05:42:22.819882 5012 generic.go:334] "Generic (PLEG): container finished" podID="8509cc68-c35e-47ea-a634-896143d747ed" containerID="4ee433ab916c49fcf886f80ee6ab1bd1a03ffacf8d9e4d295c0b15de25056e64" exitCode=0 Feb 19 05:42:22 crc kubenswrapper[5012]: I0219 05:42:22.820044 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerDied","Data":"4ee433ab916c49fcf886f80ee6ab1bd1a03ffacf8d9e4d295c0b15de25056e64"} Feb 19 05:42:24 crc kubenswrapper[5012]: I0219 05:42:24.844635 5012 generic.go:334] "Generic (PLEG): container finished" podID="a13d3004-2045-4daf-a925-7eccf541b1b4" containerID="0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e" exitCode=0 Feb 19 05:42:24 crc kubenswrapper[5012]: I0219 05:42:24.844695 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"a13d3004-2045-4daf-a925-7eccf541b1b4","Type":"ContainerDied","Data":"0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e"} Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.702323 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.860927 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerStarted","Data":"2911dc6ac75bd4dfdfed36bc08cc01049520edecc0e49a7a619bb704bce3f33a"} Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.863213 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run-ovn\") pod \"ddbea515-c638-4619-8940-b23d173ceb8b\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.863321 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-log-ovn\") pod \"ddbea515-c638-4619-8940-b23d173ceb8b\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.863430 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run\") pod \"ddbea515-c638-4619-8940-b23d173ceb8b\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.863516 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-additional-scripts\") pod 
\"ddbea515-c638-4619-8940-b23d173ceb8b\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.863538 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9lnh\" (UniqueName: \"kubernetes.io/projected/ddbea515-c638-4619-8940-b23d173ceb8b-kube-api-access-s9lnh\") pod \"ddbea515-c638-4619-8940-b23d173ceb8b\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.863555 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-scripts\") pod \"ddbea515-c638-4619-8940-b23d173ceb8b\" (UID: \"ddbea515-c638-4619-8940-b23d173ceb8b\") " Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.863697 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run" (OuterVolumeSpecName: "var-run") pod "ddbea515-c638-4619-8940-b23d173ceb8b" (UID: "ddbea515-c638-4619-8940-b23d173ceb8b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.864263 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ddbea515-c638-4619-8940-b23d173ceb8b" (UID: "ddbea515-c638-4619-8940-b23d173ceb8b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.864319 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ddbea515-c638-4619-8940-b23d173ceb8b" (UID: "ddbea515-c638-4619-8940-b23d173ceb8b"). 
InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.864399 5012 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.864569 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ddbea515-c638-4619-8940-b23d173ceb8b" (UID: "ddbea515-c638-4619-8940-b23d173ceb8b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.865205 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-scripts" (OuterVolumeSpecName: "scripts") pod "ddbea515-c638-4619-8940-b23d173ceb8b" (UID: "ddbea515-c638-4619-8940-b23d173ceb8b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.867663 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddbea515-c638-4619-8940-b23d173ceb8b-kube-api-access-s9lnh" (OuterVolumeSpecName: "kube-api-access-s9lnh") pod "ddbea515-c638-4619-8940-b23d173ceb8b" (UID: "ddbea515-c638-4619-8940-b23d173ceb8b"). InnerVolumeSpecName "kube-api-access-s9lnh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.870724 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cr94m-config-wnbbj" event={"ID":"ddbea515-c638-4619-8940-b23d173ceb8b","Type":"ContainerDied","Data":"5465c2ffb472fdf8f8ad11824f33375fa18d874c433433fe9fd1a48632f10d90"} Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.870755 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5465c2ffb472fdf8f8ad11824f33375fa18d874c433433fe9fd1a48632f10d90" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.870809 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cr94m-config-wnbbj" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.872970 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a13d3004-2045-4daf-a925-7eccf541b1b4","Type":"ContainerStarted","Data":"0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7"} Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.873556 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.902915 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371949.95188 podStartE2EDuration="1m26.902896213s" podCreationTimestamp="2026-02-19 05:40:59 +0000 UTC" firstStartedPulling="2026-02-19 05:41:01.860766879 +0000 UTC m=+957.894089438" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:42:25.892783449 +0000 UTC m=+1041.926106018" watchObservedRunningTime="2026-02-19 05:42:25.902896213 +0000 UTC m=+1041.936218782" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.965733 5012 reconciler_common.go:293] "Volume detached for volume 
\"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.965772 5012 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ddbea515-c638-4619-8940-b23d173ceb8b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.965786 5012 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.965799 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9lnh\" (UniqueName: \"kubernetes.io/projected/ddbea515-c638-4619-8940-b23d173ceb8b-kube-api-access-s9lnh\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:25 crc kubenswrapper[5012]: I0219 05:42:25.965808 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ddbea515-c638-4619-8940-b23d173ceb8b-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:26 crc kubenswrapper[5012]: I0219 05:42:26.084401 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 05:42:26 crc kubenswrapper[5012]: W0219 05:42:26.094821 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc089afc3_1655_4675_b4e1_a62ec6929498.slice/crio-ccb3124754843d610271b604712236f7d09f9eda736f99a488a3a4169f3c3630 WatchSource:0}: Error finding container ccb3124754843d610271b604712236f7d09f9eda736f99a488a3a4169f3c3630: Status 404 returned error can't find the container with id ccb3124754843d610271b604712236f7d09f9eda736f99a488a3a4169f3c3630 Feb 19 05:42:26 crc kubenswrapper[5012]: I0219 05:42:26.807756 5012 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-cr94m-config-wnbbj"] Feb 19 05:42:26 crc kubenswrapper[5012]: I0219 05:42:26.814996 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-cr94m-config-wnbbj"] Feb 19 05:42:26 crc kubenswrapper[5012]: I0219 05:42:26.881276 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-24p82" event={"ID":"31d56d90-ce06-4de3-9edb-2092780e9afe","Type":"ContainerStarted","Data":"cea9e8e15e555d9e359bdb9e094582010c0f5cb2424bf6d21370cbb196b19806"} Feb 19 05:42:26 crc kubenswrapper[5012]: I0219 05:42:26.882709 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"ccb3124754843d610271b604712236f7d09f9eda736f99a488a3a4169f3c3630"} Feb 19 05:42:26 crc kubenswrapper[5012]: I0219 05:42:26.902177 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-24p82" podStartSLOduration=2.316094413 podStartE2EDuration="21.902162234s" podCreationTimestamp="2026-02-19 05:42:05 +0000 UTC" firstStartedPulling="2026-02-19 05:42:06.028846606 +0000 UTC m=+1022.062169175" lastFinishedPulling="2026-02-19 05:42:25.614914407 +0000 UTC m=+1041.648236996" observedRunningTime="2026-02-19 05:42:26.895071396 +0000 UTC m=+1042.928393965" watchObservedRunningTime="2026-02-19 05:42:26.902162234 +0000 UTC m=+1042.935484793" Feb 19 05:42:27 crc kubenswrapper[5012]: I0219 05:42:27.891395 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"20c23a50bdfbf9304cec5e65cb9884882fe8fa307d92b584685d16b72dcbac8b"} Feb 19 05:42:27 crc kubenswrapper[5012]: I0219 05:42:27.892735 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"8741b7465595a5f514d42d04cd81f4d219a5f0381c43b9b5d436e13968c855ed"} Feb 19 05:42:27 crc kubenswrapper[5012]: I0219 05:42:27.892800 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"2778d95369e66794a3c77675f3e21538fcbdcd4a351b0caa9d18ceb3bdb6f2dd"} Feb 19 05:42:28 crc kubenswrapper[5012]: I0219 05:42:28.722995 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddbea515-c638-4619-8940-b23d173ceb8b" path="/var/lib/kubelet/pods/ddbea515-c638-4619-8940-b23d173ceb8b/volumes" Feb 19 05:42:28 crc kubenswrapper[5012]: I0219 05:42:28.923152 5012 generic.go:334] "Generic (PLEG): container finished" podID="3c628866-f96d-4e7b-8846-7073c98dd389" containerID="39447df96b54f1be84a97ec4a361863f1bba8e92bceec140937b025ac768a708" exitCode=0 Feb 19 05:42:28 crc kubenswrapper[5012]: I0219 05:42:28.923282 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"3c628866-f96d-4e7b-8846-7073c98dd389","Type":"ContainerDied","Data":"39447df96b54f1be84a97ec4a361863f1bba8e92bceec140937b025ac768a708"} Feb 19 05:42:28 crc kubenswrapper[5012]: I0219 05:42:28.929422 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"1c6ce9784b923c0bbc26479298d4861119c1d3b3bfd0e915102814bbf819bbf8"} Feb 19 05:42:29 crc kubenswrapper[5012]: I0219 05:42:29.949182 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"273b9f0584df6769286e4f11afc28c65d47a7cd17b2b7b2f136d0bda5f714c0a"} Feb 19 05:42:29 crc kubenswrapper[5012]: I0219 05:42:29.949668 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"5b5424c23ebc144ff05eae1a5743a9c894ab0c56f4b72231387ca792c10fd8b9"} Feb 19 05:42:29 crc kubenswrapper[5012]: I0219 05:42:29.952710 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerStarted","Data":"2854f6610edd35f9918bcf970a2c86698cd9bdd18894ce4faa0b91d3747adc47"} Feb 19 05:42:29 crc kubenswrapper[5012]: I0219 05:42:29.952786 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerStarted","Data":"fd666beb3889b82cdcffe025f5999afc68f3be8d81898ab269494cf52c444649"} Feb 19 05:42:29 crc kubenswrapper[5012]: I0219 05:42:29.955916 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"3c628866-f96d-4e7b-8846-7073c98dd389","Type":"ContainerStarted","Data":"cf4bbbdaf5f2ee97976e817c4b3fd945d8ee608e48e4aa6fcd69d9696509df7a"} Feb 19 05:42:29 crc kubenswrapper[5012]: I0219 05:42:29.956447 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:42:29 crc kubenswrapper[5012]: I0219 05:42:29.998345 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.998317459 podStartE2EDuration="17.998317459s" podCreationTimestamp="2026-02-19 05:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:42:29.984846771 +0000 UTC m=+1046.018169340" watchObservedRunningTime="2026-02-19 05:42:29.998317459 +0000 UTC m=+1046.031640028" Feb 19 05:42:30 crc kubenswrapper[5012]: I0219 05:42:30.032441 5012 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/notifications-rabbitmq-server-0" podStartSLOduration=-9223371945.82236 podStartE2EDuration="1m31.032416086s" podCreationTimestamp="2026-02-19 05:40:59 +0000 UTC" firstStartedPulling="2026-02-19 05:41:01.242175511 +0000 UTC m=+957.275498080" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:42:30.029660847 +0000 UTC m=+1046.062983436" watchObservedRunningTime="2026-02-19 05:42:30.032416086 +0000 UTC m=+1046.065738655" Feb 19 05:42:30 crc kubenswrapper[5012]: I0219 05:42:30.972703 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"6e349173a3c87d71c53312e2eb929429bfb92ac867575ef3a0bc1f3ce175d475"} Feb 19 05:42:30 crc kubenswrapper[5012]: I0219 05:42:30.973092 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"2a1408a7ed3d9b943f0c272db133cf5233981ab5a337afff55e0517ec5872290"} Feb 19 05:42:30 crc kubenswrapper[5012]: I0219 05:42:30.990528 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.587693 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4vdtn"] Feb 19 05:42:31 crc kubenswrapper[5012]: E0219 05:42:31.589485 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddbea515-c638-4619-8940-b23d173ceb8b" containerName="ovn-config" Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.589506 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddbea515-c638-4619-8940-b23d173ceb8b" containerName="ovn-config" Feb 19 05:42:31 crc kubenswrapper[5012]: E0219 05:42:31.589536 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c559b49-5b5e-435d-9a6a-66dd1d3cbc79" 
containerName="mariadb-account-create-update"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.589545 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c559b49-5b5e-435d-9a6a-66dd1d3cbc79" containerName="mariadb-account-create-update"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.590665 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c559b49-5b5e-435d-9a6a-66dd1d3cbc79" containerName="mariadb-account-create-update"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.590691 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddbea515-c638-4619-8940-b23d173ceb8b" containerName="ovn-config"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.591926 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4vdtn"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.626774 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4vdtn"]
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.712025 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6b89-account-create-update-65d6l"]
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.717589 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6b89-account-create-update-65d6l"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.720819 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6b89-account-create-update-65d6l"]
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.724898 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.774996 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97sch\" (UniqueName: \"kubernetes.io/projected/ff889c32-0dda-4734-a907-54f4a53e649f-kube-api-access-97sch\") pod \"cinder-db-create-4vdtn\" (UID: \"ff889c32-0dda-4734-a907-54f4a53e649f\") " pod="openstack/cinder-db-create-4vdtn"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.775133 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff889c32-0dda-4734-a907-54f4a53e649f-operator-scripts\") pod \"cinder-db-create-4vdtn\" (UID: \"ff889c32-0dda-4734-a907-54f4a53e649f\") " pod="openstack/cinder-db-create-4vdtn"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.876763 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97sch\" (UniqueName: \"kubernetes.io/projected/ff889c32-0dda-4734-a907-54f4a53e649f-kube-api-access-97sch\") pod \"cinder-db-create-4vdtn\" (UID: \"ff889c32-0dda-4734-a907-54f4a53e649f\") " pod="openstack/cinder-db-create-4vdtn"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.876806 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f81d2f2-d61b-49e6-bd6a-f466da52df74-operator-scripts\") pod \"cinder-6b89-account-create-update-65d6l\" (UID: \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\") " pod="openstack/cinder-6b89-account-create-update-65d6l"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.876858 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdjdj\" (UniqueName: \"kubernetes.io/projected/0f81d2f2-d61b-49e6-bd6a-f466da52df74-kube-api-access-cdjdj\") pod \"cinder-6b89-account-create-update-65d6l\" (UID: \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\") " pod="openstack/cinder-6b89-account-create-update-65d6l"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.876917 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff889c32-0dda-4734-a907-54f4a53e649f-operator-scripts\") pod \"cinder-db-create-4vdtn\" (UID: \"ff889c32-0dda-4734-a907-54f4a53e649f\") " pod="openstack/cinder-db-create-4vdtn"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.877679 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff889c32-0dda-4734-a907-54f4a53e649f-operator-scripts\") pod \"cinder-db-create-4vdtn\" (UID: \"ff889c32-0dda-4734-a907-54f4a53e649f\") " pod="openstack/cinder-db-create-4vdtn"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.902132 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97sch\" (UniqueName: \"kubernetes.io/projected/ff889c32-0dda-4734-a907-54f4a53e649f-kube-api-access-97sch\") pod \"cinder-db-create-4vdtn\" (UID: \"ff889c32-0dda-4734-a907-54f4a53e649f\") " pod="openstack/cinder-db-create-4vdtn"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.968149 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-x7kz5"]
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.969756 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x7kz5"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.978826 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f81d2f2-d61b-49e6-bd6a-f466da52df74-operator-scripts\") pod \"cinder-6b89-account-create-update-65d6l\" (UID: \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\") " pod="openstack/cinder-6b89-account-create-update-65d6l"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.978933 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdjdj\" (UniqueName: \"kubernetes.io/projected/0f81d2f2-d61b-49e6-bd6a-f466da52df74-kube-api-access-cdjdj\") pod \"cinder-6b89-account-create-update-65d6l\" (UID: \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\") " pod="openstack/cinder-6b89-account-create-update-65d6l"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.980177 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f81d2f2-d61b-49e6-bd6a-f466da52df74-operator-scripts\") pod \"cinder-6b89-account-create-update-65d6l\" (UID: \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\") " pod="openstack/cinder-6b89-account-create-update-65d6l"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.981788 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.982007 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.982087 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 19 05:42:31 crc kubenswrapper[5012]: I0219 05:42:31.982342 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dhq72"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.006062 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4vdtn"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.031232 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x7kz5"]
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.049152 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdjdj\" (UniqueName: \"kubernetes.io/projected/0f81d2f2-d61b-49e6-bd6a-f466da52df74-kube-api-access-cdjdj\") pod \"cinder-6b89-account-create-update-65d6l\" (UID: \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\") " pod="openstack/cinder-6b89-account-create-update-65d6l"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.066510 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-9pk56"]
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.070525 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9pk56"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.077293 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6b89-account-create-update-65d6l"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.080169 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-combined-ca-bundle\") pod \"keystone-db-sync-x7kz5\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " pod="openstack/keystone-db-sync-x7kz5"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.080222 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-config-data\") pod \"keystone-db-sync-x7kz5\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " pod="openstack/keystone-db-sync-x7kz5"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.080260 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvstr\" (UniqueName: \"kubernetes.io/projected/13b820bd-7677-4b9c-a16f-987e22a71876-kube-api-access-rvstr\") pod \"keystone-db-sync-x7kz5\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " pod="openstack/keystone-db-sync-x7kz5"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.104208 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9pk56"]
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.182964 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jv9c\" (UniqueName: \"kubernetes.io/projected/a4bd4c60-a255-42cf-8dd0-913737e4b189-kube-api-access-2jv9c\") pod \"barbican-db-create-9pk56\" (UID: \"a4bd4c60-a255-42cf-8dd0-913737e4b189\") " pod="openstack/barbican-db-create-9pk56"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.183011 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4bd4c60-a255-42cf-8dd0-913737e4b189-operator-scripts\") pod \"barbican-db-create-9pk56\" (UID: \"a4bd4c60-a255-42cf-8dd0-913737e4b189\") " pod="openstack/barbican-db-create-9pk56"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.183066 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-combined-ca-bundle\") pod \"keystone-db-sync-x7kz5\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " pod="openstack/keystone-db-sync-x7kz5"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.183105 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-config-data\") pod \"keystone-db-sync-x7kz5\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " pod="openstack/keystone-db-sync-x7kz5"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.183137 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvstr\" (UniqueName: \"kubernetes.io/projected/13b820bd-7677-4b9c-a16f-987e22a71876-kube-api-access-rvstr\") pod \"keystone-db-sync-x7kz5\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " pod="openstack/keystone-db-sync-x7kz5"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.188266 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-combined-ca-bundle\") pod \"keystone-db-sync-x7kz5\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " pod="openstack/keystone-db-sync-x7kz5"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.202813 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-config-data\") pod \"keystone-db-sync-x7kz5\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " pod="openstack/keystone-db-sync-x7kz5"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.225035 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvstr\" (UniqueName: \"kubernetes.io/projected/13b820bd-7677-4b9c-a16f-987e22a71876-kube-api-access-rvstr\") pod \"keystone-db-sync-x7kz5\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " pod="openstack/keystone-db-sync-x7kz5"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.244571 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8f98-account-create-update-7gqc9"]
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.245685 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8f98-account-create-update-7gqc9"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.250048 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.285783 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4bd4c60-a255-42cf-8dd0-913737e4b189-operator-scripts\") pod \"barbican-db-create-9pk56\" (UID: \"a4bd4c60-a255-42cf-8dd0-913737e4b189\") " pod="openstack/barbican-db-create-9pk56"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.285830 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jv9c\" (UniqueName: \"kubernetes.io/projected/a4bd4c60-a255-42cf-8dd0-913737e4b189-kube-api-access-2jv9c\") pod \"barbican-db-create-9pk56\" (UID: \"a4bd4c60-a255-42cf-8dd0-913737e4b189\") " pod="openstack/barbican-db-create-9pk56"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.286841 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4bd4c60-a255-42cf-8dd0-913737e4b189-operator-scripts\") pod \"barbican-db-create-9pk56\" (UID: \"a4bd4c60-a255-42cf-8dd0-913737e4b189\") " pod="openstack/barbican-db-create-9pk56"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.290961 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x7kz5"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.293804 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8f98-account-create-update-7gqc9"]
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.331042 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jv9c\" (UniqueName: \"kubernetes.io/projected/a4bd4c60-a255-42cf-8dd0-913737e4b189-kube-api-access-2jv9c\") pod \"barbican-db-create-9pk56\" (UID: \"a4bd4c60-a255-42cf-8dd0-913737e4b189\") " pod="openstack/barbican-db-create-9pk56"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.393538 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82wh6\" (UniqueName: \"kubernetes.io/projected/5d452976-060b-4c25-9dd0-ffed69bb4d84-kube-api-access-82wh6\") pod \"barbican-8f98-account-create-update-7gqc9\" (UID: \"5d452976-060b-4c25-9dd0-ffed69bb4d84\") " pod="openstack/barbican-8f98-account-create-update-7gqc9"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.393980 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d452976-060b-4c25-9dd0-ffed69bb4d84-operator-scripts\") pod \"barbican-8f98-account-create-update-7gqc9\" (UID: \"5d452976-060b-4c25-9dd0-ffed69bb4d84\") " pod="openstack/barbican-8f98-account-create-update-7gqc9"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.481240 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9pk56"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.496105 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82wh6\" (UniqueName: \"kubernetes.io/projected/5d452976-060b-4c25-9dd0-ffed69bb4d84-kube-api-access-82wh6\") pod \"barbican-8f98-account-create-update-7gqc9\" (UID: \"5d452976-060b-4c25-9dd0-ffed69bb4d84\") " pod="openstack/barbican-8f98-account-create-update-7gqc9"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.496258 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d452976-060b-4c25-9dd0-ffed69bb4d84-operator-scripts\") pod \"barbican-8f98-account-create-update-7gqc9\" (UID: \"5d452976-060b-4c25-9dd0-ffed69bb4d84\") " pod="openstack/barbican-8f98-account-create-update-7gqc9"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.497454 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d452976-060b-4c25-9dd0-ffed69bb4d84-operator-scripts\") pod \"barbican-8f98-account-create-update-7gqc9\" (UID: \"5d452976-060b-4c25-9dd0-ffed69bb4d84\") " pod="openstack/barbican-8f98-account-create-update-7gqc9"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.513203 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82wh6\" (UniqueName: \"kubernetes.io/projected/5d452976-060b-4c25-9dd0-ffed69bb4d84-kube-api-access-82wh6\") pod \"barbican-8f98-account-create-update-7gqc9\" (UID: \"5d452976-060b-4c25-9dd0-ffed69bb4d84\") " pod="openstack/barbican-8f98-account-create-update-7gqc9"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.563053 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8f98-account-create-update-7gqc9"
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.865840 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6b89-account-create-update-65d6l"]
Feb 19 05:42:32 crc kubenswrapper[5012]: W0219 05:42:32.869208 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f81d2f2_d61b_49e6_bd6a_f466da52df74.slice/crio-f8aeddc32aef54de5a1f06c7e67e7c5783b73f3abe911b03a5b2f23f653b20f1 WatchSource:0}: Error finding container f8aeddc32aef54de5a1f06c7e67e7c5783b73f3abe911b03a5b2f23f653b20f1: Status 404 returned error can't find the container with id f8aeddc32aef54de5a1f06c7e67e7c5783b73f3abe911b03a5b2f23f653b20f1
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.886842 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9pk56"]
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.886894 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x7kz5"]
Feb 19 05:42:32 crc kubenswrapper[5012]: W0219 05:42:32.899019 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4bd4c60_a255_42cf_8dd0_913737e4b189.slice/crio-a837298bb68d726f843f4a99384eb7f1d37d3beb14d3d3e0f4370d8252349c71 WatchSource:0}: Error finding container a837298bb68d726f843f4a99384eb7f1d37d3beb14d3d3e0f4370d8252349c71: Status 404 returned error can't find the container with id a837298bb68d726f843f4a99384eb7f1d37d3beb14d3d3e0f4370d8252349c71
Feb 19 05:42:32 crc kubenswrapper[5012]: I0219 05:42:32.913906 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4vdtn"]
Feb 19 05:42:33 crc kubenswrapper[5012]: I0219 05:42:33.017261 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6b89-account-create-update-65d6l" event={"ID":"0f81d2f2-d61b-49e6-bd6a-f466da52df74","Type":"ContainerStarted","Data":"f8aeddc32aef54de5a1f06c7e67e7c5783b73f3abe911b03a5b2f23f653b20f1"}
Feb 19 05:42:33 crc kubenswrapper[5012]: I0219 05:42:33.091986 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"d4c9c0d11d236c1c16135ca688ff4e22570fe4685c67a41a44547dd99313e9df"}
Feb 19 05:42:33 crc kubenswrapper[5012]: I0219 05:42:33.094972 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9pk56" event={"ID":"a4bd4c60-a255-42cf-8dd0-913737e4b189","Type":"ContainerStarted","Data":"a837298bb68d726f843f4a99384eb7f1d37d3beb14d3d3e0f4370d8252349c71"}
Feb 19 05:42:33 crc kubenswrapper[5012]: I0219 05:42:33.100510 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4vdtn" event={"ID":"ff889c32-0dda-4734-a907-54f4a53e649f","Type":"ContainerStarted","Data":"6b839fe2eefe4d15160c6c95535bcf0af59a5e68c5a3438dd6b22907bc36497b"}
Feb 19 05:42:33 crc kubenswrapper[5012]: I0219 05:42:33.112444 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x7kz5" event={"ID":"13b820bd-7677-4b9c-a16f-987e22a71876","Type":"ContainerStarted","Data":"df302881738acf3fcead46288efc5eedd0d4c9dce72c4827a234c232e9e538c5"}
Feb 19 05:42:33 crc kubenswrapper[5012]: I0219 05:42:33.141904 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 19 05:42:33 crc kubenswrapper[5012]: I0219 05:42:33.233670 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8f98-account-create-update-7gqc9"]
Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.149049 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"a6722afb439e0e4e5f625908511f3b87433439a209c9c6f822a8e7fb25099bf2"}
Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.149443 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"74ef8280c60348e1b546c43847c14bfe2e1e3bf77e2eeb709ec4586fa34bef6c"}
Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.149454 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"fe46ee46195802c82609a963852a86e90054b800032558598d95fc12cdb70a8f"}
Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.149463 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"bb9fc034c4b48df255fa2c97100cae4c35cab371d81c811b9ad308a1bb0532dd"}
Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.155083 5012 generic.go:334] "Generic (PLEG): container finished" podID="a4bd4c60-a255-42cf-8dd0-913737e4b189" containerID="7e6d7c6e4279d09faf69cc8325c3a9419e59f879f7e638bdadc3e1a99dfe010e" exitCode=0
Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.155144 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9pk56" event={"ID":"a4bd4c60-a255-42cf-8dd0-913737e4b189","Type":"ContainerDied","Data":"7e6d7c6e4279d09faf69cc8325c3a9419e59f879f7e638bdadc3e1a99dfe010e"}
Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.158804 5012 generic.go:334] "Generic (PLEG): container finished" podID="ff889c32-0dda-4734-a907-54f4a53e649f" containerID="67bce0df8bf4cde6aebe2e02939680ed6fbf6f5f67dfee6a477ff8a83ddd570c" exitCode=0
Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.158856 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4vdtn" event={"ID":"ff889c32-0dda-4734-a907-54f4a53e649f","Type":"ContainerDied","Data":"67bce0df8bf4cde6aebe2e02939680ed6fbf6f5f67dfee6a477ff8a83ddd570c"}
Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.166712 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8f98-account-create-update-7gqc9" event={"ID":"5d452976-060b-4c25-9dd0-ffed69bb4d84","Type":"ContainerStarted","Data":"20962d8cd5b490b4c52f0881b3105ca6e34c9e56c96152f389a414e4e6b49d12"}
Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.166867 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8f98-account-create-update-7gqc9" event={"ID":"5d452976-060b-4c25-9dd0-ffed69bb4d84","Type":"ContainerStarted","Data":"8db0b6dffa18b9baf3f0839909f48a13fe7c1eec48f5ac3faf67f36d3af428c0"}
Feb 19 05:42:34 crc kubenswrapper[5012]: I0219 05:42:34.185421 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6b89-account-create-update-65d6l" event={"ID":"0f81d2f2-d61b-49e6-bd6a-f466da52df74","Type":"ContainerStarted","Data":"152353fb3f9bf0d9255bd600198a1803f9e2b42292b1e50815808d78b63cdb99"}
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.196537 5012 generic.go:334] "Generic (PLEG): container finished" podID="5d452976-060b-4c25-9dd0-ffed69bb4d84" containerID="20962d8cd5b490b4c52f0881b3105ca6e34c9e56c96152f389a414e4e6b49d12" exitCode=0
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.196630 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8f98-account-create-update-7gqc9" event={"ID":"5d452976-060b-4c25-9dd0-ffed69bb4d84","Type":"ContainerDied","Data":"20962d8cd5b490b4c52f0881b3105ca6e34c9e56c96152f389a414e4e6b49d12"}
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.202453 5012 generic.go:334] "Generic (PLEG): container finished" podID="0f81d2f2-d61b-49e6-bd6a-f466da52df74" containerID="152353fb3f9bf0d9255bd600198a1803f9e2b42292b1e50815808d78b63cdb99" exitCode=0
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.202504 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6b89-account-create-update-65d6l" event={"ID":"0f81d2f2-d61b-49e6-bd6a-f466da52df74","Type":"ContainerDied","Data":"152353fb3f9bf0d9255bd600198a1803f9e2b42292b1e50815808d78b63cdb99"}
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.218087 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"73c1a419260a2d85129ae986042cb4cc178077399618b6b623ae5f7b170ca46b"}
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.218120 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c089afc3-1655-4675-b4e1-a62ec6929498","Type":"ContainerStarted","Data":"7f947b1a9c1522b6530cec6ef3b105d86d6b776c3374e7464247a1a9ca114eed"}
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.267632 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=43.755966849 podStartE2EDuration="49.267611855s" podCreationTimestamp="2026-02-19 05:41:46 +0000 UTC" firstStartedPulling="2026-02-19 05:42:26.09735769 +0000 UTC m=+1042.130680249" lastFinishedPulling="2026-02-19 05:42:31.609002686 +0000 UTC m=+1047.642325255" observedRunningTime="2026-02-19 05:42:35.258552458 +0000 UTC m=+1051.291875027" watchObservedRunningTime="2026-02-19 05:42:35.267611855 +0000 UTC m=+1051.300934424"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.567139 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cf875bd99-nt5p5"]
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.572844 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.576241 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.597876 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cf875bd99-nt5p5"]
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.617000 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6b89-account-create-update-65d6l"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.684846 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-sb\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.685021 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-config\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.685119 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npj62\" (UniqueName: \"kubernetes.io/projected/0339ab80-3dab-44ef-aa89-49a810242704-kube-api-access-npj62\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.685274 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-svc\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.685330 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-swift-storage-0\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.685356 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-nb\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.755414 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8f98-account-create-update-7gqc9"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.769847 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9pk56"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.801357 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f81d2f2-d61b-49e6-bd6a-f466da52df74-operator-scripts\") pod \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\" (UID: \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\") "
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.802522 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdjdj\" (UniqueName: \"kubernetes.io/projected/0f81d2f2-d61b-49e6-bd6a-f466da52df74-kube-api-access-cdjdj\") pod \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\" (UID: \"0f81d2f2-d61b-49e6-bd6a-f466da52df74\") "
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.805900 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f81d2f2-d61b-49e6-bd6a-f466da52df74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0f81d2f2-d61b-49e6-bd6a-f466da52df74" (UID: "0f81d2f2-d61b-49e6-bd6a-f466da52df74"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.805985 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-config\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.806036 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npj62\" (UniqueName: \"kubernetes.io/projected/0339ab80-3dab-44ef-aa89-49a810242704-kube-api-access-npj62\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.806264 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-svc\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.806314 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-swift-storage-0\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.806345 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-nb\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.806386 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-sb\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.806452 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f81d2f2-d61b-49e6-bd6a-f466da52df74-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.808041 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-svc\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.808798 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-config\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.810058 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-nb\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.811035 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4vdtn"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.817518 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-sb\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.817657 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f81d2f2-d61b-49e6-bd6a-f466da52df74-kube-api-access-cdjdj" (OuterVolumeSpecName: "kube-api-access-cdjdj") pod "0f81d2f2-d61b-49e6-bd6a-f466da52df74" (UID: "0f81d2f2-d61b-49e6-bd6a-f466da52df74"). InnerVolumeSpecName "kube-api-access-cdjdj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.817797 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-swift-storage-0\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.835361 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npj62\" (UniqueName: \"kubernetes.io/projected/0339ab80-3dab-44ef-aa89-49a810242704-kube-api-access-npj62\") pod \"dnsmasq-dns-6cf875bd99-nt5p5\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.899017 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5"
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.906758 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jv9c\" (UniqueName: \"kubernetes.io/projected/a4bd4c60-a255-42cf-8dd0-913737e4b189-kube-api-access-2jv9c\") pod \"a4bd4c60-a255-42cf-8dd0-913737e4b189\" (UID: \"a4bd4c60-a255-42cf-8dd0-913737e4b189\") "
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.906820 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff889c32-0dda-4734-a907-54f4a53e649f-operator-scripts\") pod \"ff889c32-0dda-4734-a907-54f4a53e649f\" (UID: \"ff889c32-0dda-4734-a907-54f4a53e649f\") "
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.906852 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4bd4c60-a255-42cf-8dd0-913737e4b189-operator-scripts\") pod \"a4bd4c60-a255-42cf-8dd0-913737e4b189\" (UID: \"a4bd4c60-a255-42cf-8dd0-913737e4b189\") "
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.906873 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82wh6\" (UniqueName: \"kubernetes.io/projected/5d452976-060b-4c25-9dd0-ffed69bb4d84-kube-api-access-82wh6\") pod \"5d452976-060b-4c25-9dd0-ffed69bb4d84\" (UID: \"5d452976-060b-4c25-9dd0-ffed69bb4d84\") "
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.906901 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97sch\" (UniqueName: \"kubernetes.io/projected/ff889c32-0dda-4734-a907-54f4a53e649f-kube-api-access-97sch\") pod \"ff889c32-0dda-4734-a907-54f4a53e649f\" (UID: \"ff889c32-0dda-4734-a907-54f4a53e649f\") "
Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.907219 5012 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d452976-060b-4c25-9dd0-ffed69bb4d84-operator-scripts\") pod \"5d452976-060b-4c25-9dd0-ffed69bb4d84\" (UID: \"5d452976-060b-4c25-9dd0-ffed69bb4d84\") " Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.907681 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdjdj\" (UniqueName: \"kubernetes.io/projected/0f81d2f2-d61b-49e6-bd6a-f466da52df74-kube-api-access-cdjdj\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.908047 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d452976-060b-4c25-9dd0-ffed69bb4d84-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d452976-060b-4c25-9dd0-ffed69bb4d84" (UID: "5d452976-060b-4c25-9dd0-ffed69bb4d84"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.908887 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff889c32-0dda-4734-a907-54f4a53e649f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff889c32-0dda-4734-a907-54f4a53e649f" (UID: "ff889c32-0dda-4734-a907-54f4a53e649f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.911498 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4bd4c60-a255-42cf-8dd0-913737e4b189-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4bd4c60-a255-42cf-8dd0-913737e4b189" (UID: "a4bd4c60-a255-42cf-8dd0-913737e4b189"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.914553 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff889c32-0dda-4734-a907-54f4a53e649f-kube-api-access-97sch" (OuterVolumeSpecName: "kube-api-access-97sch") pod "ff889c32-0dda-4734-a907-54f4a53e649f" (UID: "ff889c32-0dda-4734-a907-54f4a53e649f"). InnerVolumeSpecName "kube-api-access-97sch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.914613 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d452976-060b-4c25-9dd0-ffed69bb4d84-kube-api-access-82wh6" (OuterVolumeSpecName: "kube-api-access-82wh6") pod "5d452976-060b-4c25-9dd0-ffed69bb4d84" (UID: "5d452976-060b-4c25-9dd0-ffed69bb4d84"). InnerVolumeSpecName "kube-api-access-82wh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:35 crc kubenswrapper[5012]: I0219 05:42:35.914640 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4bd4c60-a255-42cf-8dd0-913737e4b189-kube-api-access-2jv9c" (OuterVolumeSpecName: "kube-api-access-2jv9c") pod "a4bd4c60-a255-42cf-8dd0-913737e4b189" (UID: "a4bd4c60-a255-42cf-8dd0-913737e4b189"). InnerVolumeSpecName "kube-api-access-2jv9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.009900 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d452976-060b-4c25-9dd0-ffed69bb4d84-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.009937 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jv9c\" (UniqueName: \"kubernetes.io/projected/a4bd4c60-a255-42cf-8dd0-913737e4b189-kube-api-access-2jv9c\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.009950 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff889c32-0dda-4734-a907-54f4a53e649f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.009958 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4bd4c60-a255-42cf-8dd0-913737e4b189-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.009968 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82wh6\" (UniqueName: \"kubernetes.io/projected/5d452976-060b-4c25-9dd0-ffed69bb4d84-kube-api-access-82wh6\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.009977 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97sch\" (UniqueName: \"kubernetes.io/projected/ff889c32-0dda-4734-a907-54f4a53e649f-kube-api-access-97sch\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.228171 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9pk56" 
event={"ID":"a4bd4c60-a255-42cf-8dd0-913737e4b189","Type":"ContainerDied","Data":"a837298bb68d726f843f4a99384eb7f1d37d3beb14d3d3e0f4370d8252349c71"} Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.228510 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a837298bb68d726f843f4a99384eb7f1d37d3beb14d3d3e0f4370d8252349c71" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.228566 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9pk56" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.230037 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4vdtn" event={"ID":"ff889c32-0dda-4734-a907-54f4a53e649f","Type":"ContainerDied","Data":"6b839fe2eefe4d15160c6c95535bcf0af59a5e68c5a3438dd6b22907bc36497b"} Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.230074 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b839fe2eefe4d15160c6c95535bcf0af59a5e68c5a3438dd6b22907bc36497b" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.230133 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4vdtn" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.232242 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8f98-account-create-update-7gqc9" event={"ID":"5d452976-060b-4c25-9dd0-ffed69bb4d84","Type":"ContainerDied","Data":"8db0b6dffa18b9baf3f0839909f48a13fe7c1eec48f5ac3faf67f36d3af428c0"} Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.232282 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8db0b6dffa18b9baf3f0839909f48a13fe7c1eec48f5ac3faf67f36d3af428c0" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.232369 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8f98-account-create-update-7gqc9" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.235755 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6b89-account-create-update-65d6l" event={"ID":"0f81d2f2-d61b-49e6-bd6a-f466da52df74","Type":"ContainerDied","Data":"f8aeddc32aef54de5a1f06c7e67e7c5783b73f3abe911b03a5b2f23f653b20f1"} Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.235856 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8aeddc32aef54de5a1f06c7e67e7c5783b73f3abe911b03a5b2f23f653b20f1" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.235794 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6b89-account-create-update-65d6l" Feb 19 05:42:36 crc kubenswrapper[5012]: I0219 05:42:36.403473 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cf875bd99-nt5p5"] Feb 19 05:42:37 crc kubenswrapper[5012]: I0219 05:42:37.248676 5012 generic.go:334] "Generic (PLEG): container finished" podID="31d56d90-ce06-4de3-9edb-2092780e9afe" containerID="cea9e8e15e555d9e359bdb9e094582010c0f5cb2424bf6d21370cbb196b19806" exitCode=0 Feb 19 05:42:37 crc kubenswrapper[5012]: I0219 05:42:37.249212 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-24p82" event={"ID":"31d56d90-ce06-4de3-9edb-2092780e9afe","Type":"ContainerDied","Data":"cea9e8e15e555d9e359bdb9e094582010c0f5cb2424bf6d21370cbb196b19806"} Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.268819 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" event={"ID":"0339ab80-3dab-44ef-aa89-49a810242704","Type":"ContainerStarted","Data":"7e45922725c0d6b445eef46d6fb65aea89a26361a9ca27da6cbe105e29fbce60"} Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.271252 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-sync-24p82" event={"ID":"31d56d90-ce06-4de3-9edb-2092780e9afe","Type":"ContainerDied","Data":"aa61f86cc8d1c9a72406e1f686123f265fff57ea31daf565ee1da1a7dabb6d3f"} Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.271281 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa61f86cc8d1c9a72406e1f686123f265fff57ea31daf565ee1da1a7dabb6d3f" Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.312108 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-24p82" Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.505833 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-config-data\") pod \"31d56d90-ce06-4de3-9edb-2092780e9afe\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.505934 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-combined-ca-bundle\") pod \"31d56d90-ce06-4de3-9edb-2092780e9afe\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.505956 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-db-sync-config-data\") pod \"31d56d90-ce06-4de3-9edb-2092780e9afe\" (UID: \"31d56d90-ce06-4de3-9edb-2092780e9afe\") " Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.506110 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn8l8\" (UniqueName: \"kubernetes.io/projected/31d56d90-ce06-4de3-9edb-2092780e9afe-kube-api-access-kn8l8\") pod \"31d56d90-ce06-4de3-9edb-2092780e9afe\" (UID: 
\"31d56d90-ce06-4de3-9edb-2092780e9afe\") " Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.510544 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d56d90-ce06-4de3-9edb-2092780e9afe-kube-api-access-kn8l8" (OuterVolumeSpecName: "kube-api-access-kn8l8") pod "31d56d90-ce06-4de3-9edb-2092780e9afe" (UID: "31d56d90-ce06-4de3-9edb-2092780e9afe"). InnerVolumeSpecName "kube-api-access-kn8l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.514408 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "31d56d90-ce06-4de3-9edb-2092780e9afe" (UID: "31d56d90-ce06-4de3-9edb-2092780e9afe"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.536628 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31d56d90-ce06-4de3-9edb-2092780e9afe" (UID: "31d56d90-ce06-4de3-9edb-2092780e9afe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.549239 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-config-data" (OuterVolumeSpecName: "config-data") pod "31d56d90-ce06-4de3-9edb-2092780e9afe" (UID: "31d56d90-ce06-4de3-9edb-2092780e9afe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.611168 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn8l8\" (UniqueName: \"kubernetes.io/projected/31d56d90-ce06-4de3-9edb-2092780e9afe-kube-api-access-kn8l8\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.611213 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.611229 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:39 crc kubenswrapper[5012]: I0219 05:42:39.611241 5012 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31d56d90-ce06-4de3-9edb-2092780e9afe-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.280971 5012 generic.go:334] "Generic (PLEG): container finished" podID="0339ab80-3dab-44ef-aa89-49a810242704" containerID="0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda" exitCode=0 Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.281061 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" event={"ID":"0339ab80-3dab-44ef-aa89-49a810242704","Type":"ContainerDied","Data":"0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda"} Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.282999 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-24p82" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.283001 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x7kz5" event={"ID":"13b820bd-7677-4b9c-a16f-987e22a71876","Type":"ContainerStarted","Data":"bc1e75b8122059977fabe9b750a293942be5f1e6a7daf5e75f1e50d40f43dd63"} Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.701927 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-x7kz5" podStartSLOduration=3.521443413 podStartE2EDuration="9.701909507s" podCreationTimestamp="2026-02-19 05:42:31 +0000 UTC" firstStartedPulling="2026-02-19 05:42:32.906512621 +0000 UTC m=+1048.939835190" lastFinishedPulling="2026-02-19 05:42:39.086978715 +0000 UTC m=+1055.120301284" observedRunningTime="2026-02-19 05:42:40.388817199 +0000 UTC m=+1056.422139778" watchObservedRunningTime="2026-02-19 05:42:40.701909507 +0000 UTC m=+1056.735232076" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.706925 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/notifications-rabbitmq-server-0" podUID="3c628866-f96d-4e7b-8846-7073c98dd389" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.712518 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cf875bd99-nt5p5"] Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.744369 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86d445cf77-758rq"] Feb 19 05:42:40 crc kubenswrapper[5012]: E0219 05:42:40.744968 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4bd4c60-a255-42cf-8dd0-913737e4b189" containerName="mariadb-database-create" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.744985 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a4bd4c60-a255-42cf-8dd0-913737e4b189" containerName="mariadb-database-create" Feb 19 05:42:40 crc kubenswrapper[5012]: E0219 05:42:40.745002 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff889c32-0dda-4734-a907-54f4a53e649f" containerName="mariadb-database-create" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.745008 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff889c32-0dda-4734-a907-54f4a53e649f" containerName="mariadb-database-create" Feb 19 05:42:40 crc kubenswrapper[5012]: E0219 05:42:40.745023 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f81d2f2-d61b-49e6-bd6a-f466da52df74" containerName="mariadb-account-create-update" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.745030 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f81d2f2-d61b-49e6-bd6a-f466da52df74" containerName="mariadb-account-create-update" Feb 19 05:42:40 crc kubenswrapper[5012]: E0219 05:42:40.745039 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d452976-060b-4c25-9dd0-ffed69bb4d84" containerName="mariadb-account-create-update" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.745046 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d452976-060b-4c25-9dd0-ffed69bb4d84" containerName="mariadb-account-create-update" Feb 19 05:42:40 crc kubenswrapper[5012]: E0219 05:42:40.745066 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31d56d90-ce06-4de3-9edb-2092780e9afe" containerName="glance-db-sync" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.745071 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="31d56d90-ce06-4de3-9edb-2092780e9afe" containerName="glance-db-sync" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.745229 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f81d2f2-d61b-49e6-bd6a-f466da52df74" containerName="mariadb-account-create-update" Feb 19 05:42:40 crc 
kubenswrapper[5012]: I0219 05:42:40.745263 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="31d56d90-ce06-4de3-9edb-2092780e9afe" containerName="glance-db-sync" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.745271 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff889c32-0dda-4734-a907-54f4a53e649f" containerName="mariadb-database-create" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.745290 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4bd4c60-a255-42cf-8dd0-913737e4b189" containerName="mariadb-database-create" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.745325 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d452976-060b-4c25-9dd0-ffed69bb4d84" containerName="mariadb-account-create-update" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.746200 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.758551 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86d445cf77-758rq"] Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.936291 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-sb\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.936370 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-nb\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 
05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.936391 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-config\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.936442 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-swift-storage-0\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.936853 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-svc\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:40 crc kubenswrapper[5012]: I0219 05:42:40.936928 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8bxt\" (UniqueName: \"kubernetes.io/projected/22696f62-66c5-4302-b9dc-24a981de161e-kube-api-access-j8bxt\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.038062 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-svc\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 
19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.038121 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8bxt\" (UniqueName: \"kubernetes.io/projected/22696f62-66c5-4302-b9dc-24a981de161e-kube-api-access-j8bxt\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.038148 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-sb\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.038180 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-nb\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.038198 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-config\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.038234 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-swift-storage-0\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 
05:42:41.039041 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-svc\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.039114 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-swift-storage-0\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.039634 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-nb\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.039649 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-sb\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.040106 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-config\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.075061 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8bxt\" 
(UniqueName: \"kubernetes.io/projected/22696f62-66c5-4302-b9dc-24a981de161e-kube-api-access-j8bxt\") pod \"dnsmasq-dns-86d445cf77-758rq\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.245528 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.305047 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" event={"ID":"0339ab80-3dab-44ef-aa89-49a810242704","Type":"ContainerStarted","Data":"a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f"} Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.335841 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" podStartSLOduration=6.335821977 podStartE2EDuration="6.335821977s" podCreationTimestamp="2026-02-19 05:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:42:41.328668967 +0000 UTC m=+1057.361991556" watchObservedRunningTime="2026-02-19 05:42:41.335821977 +0000 UTC m=+1057.369144556" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.368277 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:41 crc kubenswrapper[5012]: I0219 05:42:41.896863 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86d445cf77-758rq"] Feb 19 05:42:41 crc kubenswrapper[5012]: W0219 05:42:41.900642 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22696f62_66c5_4302_b9dc_24a981de161e.slice/crio-688a7a5c3b7e8a1921835e248109719316499bbe41f38e9de1a4cdf97193bb54 WatchSource:0}: Error finding container 688a7a5c3b7e8a1921835e248109719316499bbe41f38e9de1a4cdf97193bb54: Status 404 returned error can't find the container with id 688a7a5c3b7e8a1921835e248109719316499bbe41f38e9de1a4cdf97193bb54 Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.316916 5012 generic.go:334] "Generic (PLEG): container finished" podID="22696f62-66c5-4302-b9dc-24a981de161e" containerID="4dc545a98a1a2b7d3652e3a5654dc5bb1193d0dcb50d17ba317e9cd60b9261d9" exitCode=0 Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.317249 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" podUID="0339ab80-3dab-44ef-aa89-49a810242704" containerName="dnsmasq-dns" containerID="cri-o://a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f" gracePeriod=10 Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.317399 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d445cf77-758rq" event={"ID":"22696f62-66c5-4302-b9dc-24a981de161e","Type":"ContainerDied","Data":"4dc545a98a1a2b7d3652e3a5654dc5bb1193d0dcb50d17ba317e9cd60b9261d9"} Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.317440 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d445cf77-758rq" 
event={"ID":"22696f62-66c5-4302-b9dc-24a981de161e","Type":"ContainerStarted","Data":"688a7a5c3b7e8a1921835e248109719316499bbe41f38e9de1a4cdf97193bb54"} Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.317485 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.774904 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.976226 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-config\") pod \"0339ab80-3dab-44ef-aa89-49a810242704\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.976998 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npj62\" (UniqueName: \"kubernetes.io/projected/0339ab80-3dab-44ef-aa89-49a810242704-kube-api-access-npj62\") pod \"0339ab80-3dab-44ef-aa89-49a810242704\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.977106 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-svc\") pod \"0339ab80-3dab-44ef-aa89-49a810242704\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.977390 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-nb\") pod \"0339ab80-3dab-44ef-aa89-49a810242704\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.977566 5012 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-swift-storage-0\") pod \"0339ab80-3dab-44ef-aa89-49a810242704\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.977674 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-sb\") pod \"0339ab80-3dab-44ef-aa89-49a810242704\" (UID: \"0339ab80-3dab-44ef-aa89-49a810242704\") " Feb 19 05:42:42 crc kubenswrapper[5012]: I0219 05:42:42.987460 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0339ab80-3dab-44ef-aa89-49a810242704-kube-api-access-npj62" (OuterVolumeSpecName: "kube-api-access-npj62") pod "0339ab80-3dab-44ef-aa89-49a810242704" (UID: "0339ab80-3dab-44ef-aa89-49a810242704"). InnerVolumeSpecName "kube-api-access-npj62". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.017998 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0339ab80-3dab-44ef-aa89-49a810242704" (UID: "0339ab80-3dab-44ef-aa89-49a810242704"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.018406 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0339ab80-3dab-44ef-aa89-49a810242704" (UID: "0339ab80-3dab-44ef-aa89-49a810242704"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.019058 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0339ab80-3dab-44ef-aa89-49a810242704" (UID: "0339ab80-3dab-44ef-aa89-49a810242704"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.030708 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-config" (OuterVolumeSpecName: "config") pod "0339ab80-3dab-44ef-aa89-49a810242704" (UID: "0339ab80-3dab-44ef-aa89-49a810242704"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.031609 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0339ab80-3dab-44ef-aa89-49a810242704" (UID: "0339ab80-3dab-44ef-aa89-49a810242704"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.080000 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npj62\" (UniqueName: \"kubernetes.io/projected/0339ab80-3dab-44ef-aa89-49a810242704-kube-api-access-npj62\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.080038 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.080049 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.080057 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.080065 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.080076 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0339ab80-3dab-44ef-aa89-49a810242704-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.141827 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.148956 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.329166 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d445cf77-758rq" event={"ID":"22696f62-66c5-4302-b9dc-24a981de161e","Type":"ContainerStarted","Data":"9b422b9b679ceafddb401ce42356c632b11ead6742dea8a3065da32fcafda70c"} Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.331268 5012 generic.go:334] "Generic (PLEG): container finished" podID="0339ab80-3dab-44ef-aa89-49a810242704" containerID="a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f" exitCode=0 Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.331349 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" event={"ID":"0339ab80-3dab-44ef-aa89-49a810242704","Type":"ContainerDied","Data":"a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f"} Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.331408 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" event={"ID":"0339ab80-3dab-44ef-aa89-49a810242704","Type":"ContainerDied","Data":"7e45922725c0d6b445eef46d6fb65aea89a26361a9ca27da6cbe105e29fbce60"} Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.331430 5012 scope.go:117] "RemoveContainer" containerID="a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.331543 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cf875bd99-nt5p5" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.337841 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.357335 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86d445cf77-758rq" podStartSLOduration=3.357313577 podStartE2EDuration="3.357313577s" podCreationTimestamp="2026-02-19 05:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:42:43.348223138 +0000 UTC m=+1059.381545717" watchObservedRunningTime="2026-02-19 05:42:43.357313577 +0000 UTC m=+1059.390636146" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.362372 5012 scope.go:117] "RemoveContainer" containerID="0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.374352 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cf875bd99-nt5p5"] Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.383264 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cf875bd99-nt5p5"] Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.385545 5012 scope.go:117] "RemoveContainer" containerID="a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f" Feb 19 05:42:43 crc kubenswrapper[5012]: E0219 05:42:43.385987 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f\": container with ID starting with a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f not found: ID does not exist" containerID="a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f" Feb 19 05:42:43 crc 
kubenswrapper[5012]: I0219 05:42:43.386019 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f"} err="failed to get container status \"a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f\": rpc error: code = NotFound desc = could not find container \"a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f\": container with ID starting with a4f37667ad69dc8d159bc7958f8b0f2d0632a45291229f6e0821f68388783b6f not found: ID does not exist" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.386043 5012 scope.go:117] "RemoveContainer" containerID="0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda" Feb 19 05:42:43 crc kubenswrapper[5012]: E0219 05:42:43.388539 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda\": container with ID starting with 0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda not found: ID does not exist" containerID="0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda" Feb 19 05:42:43 crc kubenswrapper[5012]: I0219 05:42:43.388570 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda"} err="failed to get container status \"0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda\": rpc error: code = NotFound desc = could not find container \"0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda\": container with ID starting with 0e78e210f7ba3b0e504120d0bfc9e33d691962ddf629aafc756ef5e496c7cdda not found: ID does not exist" Feb 19 05:42:44 crc kubenswrapper[5012]: I0219 05:42:44.362458 5012 generic.go:334] "Generic (PLEG): container finished" podID="13b820bd-7677-4b9c-a16f-987e22a71876" 
containerID="bc1e75b8122059977fabe9b750a293942be5f1e6a7daf5e75f1e50d40f43dd63" exitCode=0 Feb 19 05:42:44 crc kubenswrapper[5012]: I0219 05:42:44.363713 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x7kz5" event={"ID":"13b820bd-7677-4b9c-a16f-987e22a71876","Type":"ContainerDied","Data":"bc1e75b8122059977fabe9b750a293942be5f1e6a7daf5e75f1e50d40f43dd63"} Feb 19 05:42:44 crc kubenswrapper[5012]: I0219 05:42:44.363745 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:44 crc kubenswrapper[5012]: I0219 05:42:44.431486 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:42:44 crc kubenswrapper[5012]: I0219 05:42:44.431553 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:42:44 crc kubenswrapper[5012]: I0219 05:42:44.725012 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0339ab80-3dab-44ef-aa89-49a810242704" path="/var/lib/kubelet/pods/0339ab80-3dab-44ef-aa89-49a810242704/volumes" Feb 19 05:42:45 crc kubenswrapper[5012]: I0219 05:42:45.798236 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x7kz5" Feb 19 05:42:45 crc kubenswrapper[5012]: I0219 05:42:45.941575 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-config-data\") pod \"13b820bd-7677-4b9c-a16f-987e22a71876\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " Feb 19 05:42:45 crc kubenswrapper[5012]: I0219 05:42:45.941690 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvstr\" (UniqueName: \"kubernetes.io/projected/13b820bd-7677-4b9c-a16f-987e22a71876-kube-api-access-rvstr\") pod \"13b820bd-7677-4b9c-a16f-987e22a71876\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " Feb 19 05:42:45 crc kubenswrapper[5012]: I0219 05:42:45.941738 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-combined-ca-bundle\") pod \"13b820bd-7677-4b9c-a16f-987e22a71876\" (UID: \"13b820bd-7677-4b9c-a16f-987e22a71876\") " Feb 19 05:42:45 crc kubenswrapper[5012]: I0219 05:42:45.950279 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b820bd-7677-4b9c-a16f-987e22a71876-kube-api-access-rvstr" (OuterVolumeSpecName: "kube-api-access-rvstr") pod "13b820bd-7677-4b9c-a16f-987e22a71876" (UID: "13b820bd-7677-4b9c-a16f-987e22a71876"). InnerVolumeSpecName "kube-api-access-rvstr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:45 crc kubenswrapper[5012]: I0219 05:42:45.976747 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13b820bd-7677-4b9c-a16f-987e22a71876" (UID: "13b820bd-7677-4b9c-a16f-987e22a71876"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.008027 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-config-data" (OuterVolumeSpecName: "config-data") pod "13b820bd-7677-4b9c-a16f-987e22a71876" (UID: "13b820bd-7677-4b9c-a16f-987e22a71876"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.044440 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.044559 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvstr\" (UniqueName: \"kubernetes.io/projected/13b820bd-7677-4b9c-a16f-987e22a71876-kube-api-access-rvstr\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.044623 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13b820bd-7677-4b9c-a16f-987e22a71876-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.387296 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x7kz5" event={"ID":"13b820bd-7677-4b9c-a16f-987e22a71876","Type":"ContainerDied","Data":"df302881738acf3fcead46288efc5eedd0d4c9dce72c4827a234c232e9e538c5"} Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.387813 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df302881738acf3fcead46288efc5eedd0d4c9dce72c4827a234c232e9e538c5" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.387741 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x7kz5" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.727358 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kqsgz"] Feb 19 05:42:46 crc kubenswrapper[5012]: E0219 05:42:46.730930 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b820bd-7677-4b9c-a16f-987e22a71876" containerName="keystone-db-sync" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.731011 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b820bd-7677-4b9c-a16f-987e22a71876" containerName="keystone-db-sync" Feb 19 05:42:46 crc kubenswrapper[5012]: E0219 05:42:46.731069 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0339ab80-3dab-44ef-aa89-49a810242704" containerName="dnsmasq-dns" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.731118 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0339ab80-3dab-44ef-aa89-49a810242704" containerName="dnsmasq-dns" Feb 19 05:42:46 crc kubenswrapper[5012]: E0219 05:42:46.731141 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0339ab80-3dab-44ef-aa89-49a810242704" containerName="init" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.731147 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0339ab80-3dab-44ef-aa89-49a810242704" containerName="init" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.731335 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0339ab80-3dab-44ef-aa89-49a810242704" containerName="dnsmasq-dns" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.731350 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="13b820bd-7677-4b9c-a16f-987e22a71876" containerName="keystone-db-sync" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.731939 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.737239 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.737483 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.737576 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.737536 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dhq72" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.737792 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.745777 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kqsgz"] Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.789820 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86d445cf77-758rq"] Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.790036 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86d445cf77-758rq" podUID="22696f62-66c5-4302-b9dc-24a981de161e" containerName="dnsmasq-dns" containerID="cri-o://9b422b9b679ceafddb401ce42356c632b11ead6742dea8a3065da32fcafda70c" gracePeriod=10 Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.862868 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-credential-keys\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc 
kubenswrapper[5012]: I0219 05:42:46.862944 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-fernet-keys\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.863002 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-scripts\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.863044 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-config-data\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.863061 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7hsh\" (UniqueName: \"kubernetes.io/projected/25558255-c27f-4f6e-a838-675ae8ec77b6-kube-api-access-d7hsh\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.863095 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-combined-ca-bundle\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc 
kubenswrapper[5012]: I0219 05:42:46.900521 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cc5b45897-x97md"] Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.947191 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cc5b45897-x97md"] Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.947291 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.971139 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-credential-keys\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.973077 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-fernet-keys\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.973221 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-scripts\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.973263 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-config-data\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc 
kubenswrapper[5012]: I0219 05:42:46.973279 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7hsh\" (UniqueName: \"kubernetes.io/projected/25558255-c27f-4f6e-a838-675ae8ec77b6-kube-api-access-d7hsh\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.973326 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-combined-ca-bundle\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.977864 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-credential-keys\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.977996 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-fernet-keys\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.987222 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-config-data\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.990723 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-scripts\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:46 crc kubenswrapper[5012]: I0219 05:42:46.994850 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-combined-ca-bundle\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.071920 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7hsh\" (UniqueName: \"kubernetes.io/projected/25558255-c27f-4f6e-a838-675ae8ec77b6-kube-api-access-d7hsh\") pod \"keystone-bootstrap-kqsgz\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.075886 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbncd\" (UniqueName: \"kubernetes.io/projected/3d569c4f-6582-4673-a847-2243e668635d-kube-api-access-dbncd\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.075970 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-swift-storage-0\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.076001 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-config\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.076031 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-svc\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.076102 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-nb\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.076154 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-sb\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.179260 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-swift-storage-0\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.179574 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-config\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.179700 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-svc\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.179773 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-nb\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.179829 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-sb\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.179867 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbncd\" (UniqueName: \"kubernetes.io/projected/3d569c4f-6582-4673-a847-2243e668635d-kube-api-access-dbncd\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.180698 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-swift-storage-0\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.180994 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-config\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.182004 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-nb\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.182135 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-svc\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.182958 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-sb\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.197742 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56f66dc579-dpndj"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.202650 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.206493 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.206836 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.221779 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.222619 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-2s75z" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.241273 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbncd\" (UniqueName: \"kubernetes.io/projected/3d569c4f-6582-4673-a847-2243e668635d-kube-api-access-dbncd\") pod \"dnsmasq-dns-cc5b45897-x97md\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.248650 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56f66dc579-dpndj"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.283054 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.286446 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.288542 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.289348 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.289908 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.349008 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-w9g6v"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.350187 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.363583 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dzmq8" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.363836 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.366954 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.369328 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.377399 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.389205 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cb1825de-9782-4820-96aa-d4909a0f7820-horizon-secret-key\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.389288 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb1825de-9782-4820-96aa-d4909a0f7820-logs\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.389333 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-config-data\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.389383 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9stt\" (UniqueName: \"kubernetes.io/projected/cb1825de-9782-4820-96aa-d4909a0f7820-kube-api-access-p9stt\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc 
kubenswrapper[5012]: I0219 05:42:47.389444 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-scripts\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.393858 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-w9g6v"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.446259 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cc5b45897-x97md"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.449317 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.451169 5012 generic.go:334] "Generic (PLEG): container finished" podID="22696f62-66c5-4302-b9dc-24a981de161e" containerID="9b422b9b679ceafddb401ce42356c632b11ead6742dea8a3065da32fcafda70c" exitCode=0 Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.453517 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d445cf77-758rq" event={"ID":"22696f62-66c5-4302-b9dc-24a981de161e","Type":"ContainerDied","Data":"9b422b9b679ceafddb401ce42356c632b11ead6742dea8a3065da32fcafda70c"} Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.453691 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.459357 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pmvmf" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.459747 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.459876 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.459988 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.464826 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69cc8c4d6f-zkg8h"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.472392 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.490514 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9stt\" (UniqueName: \"kubernetes.io/projected/cb1825de-9782-4820-96aa-d4909a0f7820-kube-api-access-p9stt\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.492539 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rnk9\" (UniqueName: \"kubernetes.io/projected/be803869-4625-418d-bd39-bdbb4e6e0bfd-kube-api-access-6rnk9\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.491638 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.492658 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be803869-4625-418d-bd39-bdbb4e6e0bfd-logs\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.492727 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-scripts\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.492788 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-config-data\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.492993 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cb1825de-9782-4820-96aa-d4909a0f7820-horizon-secret-key\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493070 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-run-httpd\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493244 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493265 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qkmd\" (UniqueName: \"kubernetes.io/projected/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-kube-api-access-6qkmd\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493508 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-combined-ca-bundle\") pod 
\"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493551 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493644 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb1825de-9782-4820-96aa-d4909a0f7820-logs\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493703 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-config-data\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493728 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-scripts\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493764 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-config-data\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " 
pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493817 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-log-httpd\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493853 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-scripts\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.493879 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-scripts\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.499899 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cb1825de-9782-4820-96aa-d4909a0f7820-horizon-secret-key\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.499997 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb1825de-9782-4820-96aa-d4909a0f7820-logs\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.504894 5012 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/horizon-5c45b5647f-k799c"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.508064 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.512417 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9stt\" (UniqueName: \"kubernetes.io/projected/cb1825de-9782-4820-96aa-d4909a0f7820-kube-api-access-p9stt\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.513191 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-config-data\") pod \"horizon-56f66dc579-dpndj\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") " pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.517600 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69cc8c4d6f-zkg8h"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.542943 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c45b5647f-k799c"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.562713 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.564628 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.566828 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.567613 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.573165 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.591097 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56f66dc579-dpndj" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601071 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601244 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frcn7\" (UniqueName: \"kubernetes.io/projected/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-kube-api-access-frcn7\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601351 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-log-httpd\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 
05:42:47.601473 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-scripts\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601544 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-scripts\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601707 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-svc\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601774 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601803 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rnk9\" (UniqueName: \"kubernetes.io/projected/be803869-4625-418d-bd39-bdbb4e6e0bfd-kube-api-access-6rnk9\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601818 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-config\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601834 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-config-data\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601885 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhfdv\" (UniqueName: \"kubernetes.io/projected/13bff5bd-2005-4cce-986a-5bcd2d5a396c-kube-api-access-zhfdv\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601922 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.601970 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be803869-4625-418d-bd39-bdbb4e6e0bfd-logs\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602034 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-logs\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602059 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-log-httpd\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602078 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-config-data\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602519 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-swift-storage-0\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602599 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-run-httpd\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602633 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be803869-4625-418d-bd39-bdbb4e6e0bfd-logs\") pod 
\"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602719 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602742 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qkmd\" (UniqueName: \"kubernetes.io/projected/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-kube-api-access-6qkmd\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602768 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-combined-ca-bundle\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602796 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602829 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-nb\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " 
pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602870 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-sb\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602903 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602938 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-scripts\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.602964 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-config-data\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.605246 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-scripts\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.607070 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-config-data\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.607539 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-run-httpd\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.608090 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-jzclm"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.610248 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.612273 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hg9kp" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.613629 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.613792 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-config-data\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.614292 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-combined-ca-bundle\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " 
pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.618641 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-scripts\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.619011 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.627574 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.637542 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jzclm"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.637553 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qkmd\" (UniqueName: \"kubernetes.io/projected/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-kube-api-access-6qkmd\") pod \"ceilometer-0\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") " pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.639026 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rnk9\" (UniqueName: \"kubernetes.io/projected/be803869-4625-418d-bd39-bdbb4e6e0bfd-kube-api-access-6rnk9\") pod \"placement-db-sync-w9g6v\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " 
pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.704156 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-logs\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.705464 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.705544 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.705703 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwgzm\" (UniqueName: \"kubernetes.io/projected/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-kube-api-access-hwgzm\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.705771 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d5eb71f6-31df-418a-98dd-11668ff38825-horizon-secret-key\") pod \"horizon-5c45b5647f-k799c\" (UID: 
\"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.705840 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-svc\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.705912 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.705978 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-config\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706054 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-config-data\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706122 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-combined-ca-bundle\") pod \"barbican-db-sync-jzclm\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " pod="openstack/barbican-db-sync-jzclm" 
Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706189 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhfdv\" (UniqueName: \"kubernetes.io/projected/13bff5bd-2005-4cce-986a-5bcd2d5a396c-kube-api-access-zhfdv\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706252 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5eb71f6-31df-418a-98dd-11668ff38825-logs\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706334 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706405 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-db-sync-config-data\") pod \"barbican-db-sync-jzclm\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706497 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-scripts\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 
05:42:47.706563 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706627 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-logs\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706694 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706775 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706842 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-swift-storage-0\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706913 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-config-data\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.706995 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.707077 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z95mm\" (UniqueName: \"kubernetes.io/projected/a34a979c-9102-471f-9678-048fd5198cb8-kube-api-access-z95mm\") pod \"barbican-db-sync-jzclm\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.707143 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-nb\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.707210 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-sb\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.707342 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.707421 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq85g\" (UniqueName: \"kubernetes.io/projected/d5eb71f6-31df-418a-98dd-11668ff38825-kube-api-access-sq85g\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.707501 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.707564 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frcn7\" (UniqueName: \"kubernetes.io/projected/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-kube-api-access-frcn7\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.707636 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-scripts\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.710751 5012 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w9g6v" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.711598 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-logs\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.712451 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-swift-storage-0\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.712497 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-nb\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.713119 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.713277 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-scripts\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc 
kubenswrapper[5012]: I0219 05:42:47.713885 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-config\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.714226 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-svc\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.714376 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.714426 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-sb\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.724758 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.729421 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.730136 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-config-data\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.733200 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhfdv\" (UniqueName: \"kubernetes.io/projected/13bff5bd-2005-4cce-986a-5bcd2d5a396c-kube-api-access-zhfdv\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.733802 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frcn7\" (UniqueName: \"kubernetes.io/projected/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-kube-api-access-frcn7\") pod \"dnsmasq-dns-69cc8c4d6f-zkg8h\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.751426 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cc5b45897-x97md"] Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.765832 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc 
kubenswrapper[5012]: I0219 05:42:47.800624 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810034 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810428 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810471 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-config-data\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810515 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810535 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z95mm\" (UniqueName: \"kubernetes.io/projected/a34a979c-9102-471f-9678-048fd5198cb8-kube-api-access-z95mm\") pod \"barbican-db-sync-jzclm\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810564 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sq85g\" (UniqueName: \"kubernetes.io/projected/d5eb71f6-31df-418a-98dd-11668ff38825-kube-api-access-sq85g\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810602 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-logs\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810629 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810644 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810661 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwgzm\" (UniqueName: \"kubernetes.io/projected/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-kube-api-access-hwgzm\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810679 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d5eb71f6-31df-418a-98dd-11668ff38825-horizon-secret-key\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810707 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-combined-ca-bundle\") pod \"barbican-db-sync-jzclm\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810727 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5eb71f6-31df-418a-98dd-11668ff38825-logs\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810748 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-db-sync-config-data\") pod \"barbican-db-sync-jzclm\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810769 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-scripts\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810786 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.810819 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.811941 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.813079 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-logs\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.813429 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-scripts\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.813689 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.817922 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.818106 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.818126 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-config-data\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.818373 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5eb71f6-31df-418a-98dd-11668ff38825-logs\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.820861 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-db-sync-config-data\") pod \"barbican-db-sync-jzclm\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.831624 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-combined-ca-bundle\") pod \"barbican-db-sync-jzclm\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.832120 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.841961 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.854803 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d5eb71f6-31df-418a-98dd-11668ff38825-horizon-secret-key\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.865397 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq85g\" (UniqueName: \"kubernetes.io/projected/d5eb71f6-31df-418a-98dd-11668ff38825-kube-api-access-sq85g\") pod \"horizon-5c45b5647f-k799c\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.868042 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z95mm\" (UniqueName: 
\"kubernetes.io/projected/a34a979c-9102-471f-9678-048fd5198cb8-kube-api-access-z95mm\") pod \"barbican-db-sync-jzclm\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.881380 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwgzm\" (UniqueName: \"kubernetes.io/projected/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-kube-api-access-hwgzm\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.916984 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:42:47 crc kubenswrapper[5012]: I0219 05:42:47.948623 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jzclm" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.007850 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.076391 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kqsgz"] Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.137974 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.153403 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56f66dc579-dpndj"] Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.191750 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.271405 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-w9g6v"] Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.330918 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.448069 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8bxt\" (UniqueName: \"kubernetes.io/projected/22696f62-66c5-4302-b9dc-24a981de161e-kube-api-access-j8bxt\") pod \"22696f62-66c5-4302-b9dc-24a981de161e\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.448146 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-nb\") pod \"22696f62-66c5-4302-b9dc-24a981de161e\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.448214 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-swift-storage-0\") pod \"22696f62-66c5-4302-b9dc-24a981de161e\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.448245 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-svc\") pod \"22696f62-66c5-4302-b9dc-24a981de161e\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.448270 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-config\") pod \"22696f62-66c5-4302-b9dc-24a981de161e\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.448332 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-sb\") pod \"22696f62-66c5-4302-b9dc-24a981de161e\" (UID: \"22696f62-66c5-4302-b9dc-24a981de161e\") " Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.454708 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22696f62-66c5-4302-b9dc-24a981de161e-kube-api-access-j8bxt" (OuterVolumeSpecName: "kube-api-access-j8bxt") pod "22696f62-66c5-4302-b9dc-24a981de161e" (UID: "22696f62-66c5-4302-b9dc-24a981de161e"). InnerVolumeSpecName "kube-api-access-j8bxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.475028 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f66dc579-dpndj" event={"ID":"cb1825de-9782-4820-96aa-d4909a0f7820","Type":"ContainerStarted","Data":"9f3f55ad97ef9c22bb96987a2cbaf0c250aabbcc040a9c414cfd11f3987fe5ea"} Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.482038 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w9g6v" event={"ID":"be803869-4625-418d-bd39-bdbb4e6e0bfd","Type":"ContainerStarted","Data":"1bf63392d872a713c6fdde27be345aac65be8a37d2e0427ef52052d66a795c4c"} Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.500093 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kqsgz" event={"ID":"25558255-c27f-4f6e-a838-675ae8ec77b6","Type":"ContainerStarted","Data":"1512f5c39e8f19ace9b3040d9e0b560368c4be80736d22a497fc8bc26c80da61"} Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 
05:42:48.504258 5012 generic.go:334] "Generic (PLEG): container finished" podID="3d569c4f-6582-4673-a847-2243e668635d" containerID="6e7d52bd562ed0efb2a1545f6259d1676314bebb149958773ecde4d783c3e952" exitCode=0 Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.504312 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cc5b45897-x97md" event={"ID":"3d569c4f-6582-4673-a847-2243e668635d","Type":"ContainerDied","Data":"6e7d52bd562ed0efb2a1545f6259d1676314bebb149958773ecde4d783c3e952"} Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.504329 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cc5b45897-x97md" event={"ID":"3d569c4f-6582-4673-a847-2243e668635d","Type":"ContainerStarted","Data":"7d63ba2ee2576ca17b6efd2d48b316dd1077ddf56df509a2d331138612d7e898"} Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.537857 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "22696f62-66c5-4302-b9dc-24a981de161e" (UID: "22696f62-66c5-4302-b9dc-24a981de161e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.538106 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86d445cf77-758rq" event={"ID":"22696f62-66c5-4302-b9dc-24a981de161e","Type":"ContainerDied","Data":"688a7a5c3b7e8a1921835e248109719316499bbe41f38e9de1a4cdf97193bb54"} Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.538147 5012 scope.go:117] "RemoveContainer" containerID="9b422b9b679ceafddb401ce42356c632b11ead6742dea8a3065da32fcafda70c" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.538402 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86d445cf77-758rq" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.540040 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69cc8c4d6f-zkg8h"] Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.550656 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.550681 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8bxt\" (UniqueName: \"kubernetes.io/projected/22696f62-66c5-4302-b9dc-24a981de161e-kube-api-access-j8bxt\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.616669 5012 scope.go:117] "RemoveContainer" containerID="4dc545a98a1a2b7d3652e3a5654dc5bb1193d0dcb50d17ba317e9cd60b9261d9" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.647057 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jzclm"] Feb 19 05:42:48 crc kubenswrapper[5012]: W0219 05:42:48.681778 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda34a979c_9102_471f_9678_048fd5198cb8.slice/crio-15ee0e6aea238f0e16da222d8f4f49d691f91234f9216b9e8070275343d6a969 WatchSource:0}: Error finding container 15ee0e6aea238f0e16da222d8f4f49d691f91234f9216b9e8070275343d6a969: Status 404 returned error can't find the container with id 15ee0e6aea238f0e16da222d8f4f49d691f91234f9216b9e8070275343d6a969 Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.690252 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.716624 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "22696f62-66c5-4302-b9dc-24a981de161e" (UID: "22696f62-66c5-4302-b9dc-24a981de161e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.731589 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-config" (OuterVolumeSpecName: "config") pod "22696f62-66c5-4302-b9dc-24a981de161e" (UID: "22696f62-66c5-4302-b9dc-24a981de161e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.740551 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "22696f62-66c5-4302-b9dc-24a981de161e" (UID: "22696f62-66c5-4302-b9dc-24a981de161e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.755917 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.755950 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.755961 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.759460 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "22696f62-66c5-4302-b9dc-24a981de161e" (UID: "22696f62-66c5-4302-b9dc-24a981de161e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:48 crc kubenswrapper[5012]: W0219 05:42:48.762170 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bd6edb4_0376_458f_bb9d_f24e5e7ff47b.slice/crio-7e8d6baa89d2887533fedd350653f8112826dc19a88f8494ecc19699d4368a44 WatchSource:0}: Error finding container 7e8d6baa89d2887533fedd350653f8112826dc19a88f8494ecc19699d4368a44: Status 404 returned error can't find the container with id 7e8d6baa89d2887533fedd350653f8112826dc19a88f8494ecc19699d4368a44 Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.762249 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.877526 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22696f62-66c5-4302-b9dc-24a981de161e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.911261 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c45b5647f-k799c"] Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.960645 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86d445cf77-758rq"] Feb 19 05:42:48 crc kubenswrapper[5012]: I0219 05:42:48.992917 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86d445cf77-758rq"] Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.079836 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.172254 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.182159 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-nb\") pod \"3d569c4f-6582-4673-a847-2243e668635d\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.182255 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbncd\" (UniqueName: \"kubernetes.io/projected/3d569c4f-6582-4673-a847-2243e668635d-kube-api-access-dbncd\") pod \"3d569c4f-6582-4673-a847-2243e668635d\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.182310 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-svc\") pod \"3d569c4f-6582-4673-a847-2243e668635d\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.182346 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-config\") pod \"3d569c4f-6582-4673-a847-2243e668635d\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.182464 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-sb\") pod \"3d569c4f-6582-4673-a847-2243e668635d\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " Feb 19 
05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.182511 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-swift-storage-0\") pod \"3d569c4f-6582-4673-a847-2243e668635d\" (UID: \"3d569c4f-6582-4673-a847-2243e668635d\") " Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.239796 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d569c4f-6582-4673-a847-2243e668635d-kube-api-access-dbncd" (OuterVolumeSpecName: "kube-api-access-dbncd") pod "3d569c4f-6582-4673-a847-2243e668635d" (UID: "3d569c4f-6582-4673-a847-2243e668635d"). InnerVolumeSpecName "kube-api-access-dbncd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.253926 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3d569c4f-6582-4673-a847-2243e668635d" (UID: "3d569c4f-6582-4673-a847-2243e668635d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.265664 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-config" (OuterVolumeSpecName: "config") pod "3d569c4f-6582-4673-a847-2243e668635d" (UID: "3d569c4f-6582-4673-a847-2243e668635d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.266854 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3d569c4f-6582-4673-a847-2243e668635d" (UID: "3d569c4f-6582-4673-a847-2243e668635d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.284809 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3d569c4f-6582-4673-a847-2243e668635d" (UID: "3d569c4f-6582-4673-a847-2243e668635d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.285896 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbncd\" (UniqueName: \"kubernetes.io/projected/3d569c4f-6582-4673-a847-2243e668635d-kube-api-access-dbncd\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.285915 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.285923 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.285931 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-swift-storage-0\") on node 
\"crc\" DevicePath \"\"" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.285939 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.317920 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3d569c4f-6582-4673-a847-2243e668635d" (UID: "3d569c4f-6582-4673-a847-2243e668635d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.387056 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d569c4f-6582-4673-a847-2243e668635d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.588331 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jzclm" event={"ID":"a34a979c-9102-471f-9678-048fd5198cb8","Type":"ContainerStarted","Data":"15ee0e6aea238f0e16da222d8f4f49d691f91234f9216b9e8070275343d6a969"} Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.593165 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" event={"ID":"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7","Type":"ContainerStarted","Data":"46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe"} Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.593263 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" event={"ID":"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7","Type":"ContainerStarted","Data":"a1291378cdde1b6340e354ff4d89e75f3fa2d7a84c8a3f64370b1decfc0c8b1c"} Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.598201 5012 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kqsgz" event={"ID":"25558255-c27f-4f6e-a838-675ae8ec77b6","Type":"ContainerStarted","Data":"12a292fc1b8e4523fdc0fb30ca3590a1b6b6f0c70c3e42e076f92a7b213241f2"} Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.609134 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cc5b45897-x97md" event={"ID":"3d569c4f-6582-4673-a847-2243e668635d","Type":"ContainerDied","Data":"7d63ba2ee2576ca17b6efd2d48b316dd1077ddf56df509a2d331138612d7e898"} Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.609182 5012 scope.go:117] "RemoveContainer" containerID="6e7d52bd562ed0efb2a1545f6259d1676314bebb149958773ecde4d783c3e952" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.609262 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cc5b45897-x97md" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.624531 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"13bff5bd-2005-4cce-986a-5bcd2d5a396c","Type":"ContainerStarted","Data":"15902dc00744af1a937cdb4358bfbecf5055d748ea34206757e15f3417243c32"} Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.632571 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b53da41-1ee4-4a06-b1ad-2f689fafd2be","Type":"ContainerStarted","Data":"d5e045aceaaad28fe4dee87429ebb210f9d9b506f56cee1ed148bccaa4202c45"} Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.640434 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kqsgz" podStartSLOduration=3.640401788 podStartE2EDuration="3.640401788s" podCreationTimestamp="2026-02-19 05:42:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:42:49.629975006 +0000 UTC 
m=+1065.663297575" watchObservedRunningTime="2026-02-19 05:42:49.640401788 +0000 UTC m=+1065.673724357" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.658185 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c45b5647f-k799c" event={"ID":"d5eb71f6-31df-418a-98dd-11668ff38825","Type":"ContainerStarted","Data":"86338dd7d36f9586a8f23b3288040adf41c4f986fc6d17aadaff0853e2749dd7"} Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.659631 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerStarted","Data":"7e8d6baa89d2887533fedd350653f8112826dc19a88f8494ecc19699d4368a44"} Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.731897 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cc5b45897-x97md"] Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.829119 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cc5b45897-x97md"] Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.884707 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56f66dc579-dpndj"] Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.939478 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.966348 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-855998b9f9-lkm6w"] Feb 19 05:42:49 crc kubenswrapper[5012]: E0219 05:42:49.966788 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22696f62-66c5-4302-b9dc-24a981de161e" containerName="dnsmasq-dns" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.966801 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="22696f62-66c5-4302-b9dc-24a981de161e" containerName="dnsmasq-dns" Feb 19 05:42:49 crc kubenswrapper[5012]: E0219 05:42:49.966830 5012 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3d569c4f-6582-4673-a847-2243e668635d" containerName="init" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.966839 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d569c4f-6582-4673-a847-2243e668635d" containerName="init" Feb 19 05:42:49 crc kubenswrapper[5012]: E0219 05:42:49.966857 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22696f62-66c5-4302-b9dc-24a981de161e" containerName="init" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.966863 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="22696f62-66c5-4302-b9dc-24a981de161e" containerName="init" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.967051 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="22696f62-66c5-4302-b9dc-24a981de161e" containerName="dnsmasq-dns" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.967065 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d569c4f-6582-4673-a847-2243e668635d" containerName="init" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.968098 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:49 crc kubenswrapper[5012]: I0219 05:42:49.987516 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-855998b9f9-lkm6w"] Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.006677 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.018886 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.118938 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-scripts\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.119410 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-config-data\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.119472 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f06c7918-a7b3-4041-bd16-63a73e47bf13-logs\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.119540 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f06c7918-a7b3-4041-bd16-63a73e47bf13-horizon-secret-key\") pod 
\"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.119562 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdkkv\" (UniqueName: \"kubernetes.io/projected/f06c7918-a7b3-4041-bd16-63a73e47bf13-kube-api-access-rdkkv\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.228669 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f06c7918-a7b3-4041-bd16-63a73e47bf13-logs\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.228726 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f06c7918-a7b3-4041-bd16-63a73e47bf13-horizon-secret-key\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.228752 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdkkv\" (UniqueName: \"kubernetes.io/projected/f06c7918-a7b3-4041-bd16-63a73e47bf13-kube-api-access-rdkkv\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.228797 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-scripts\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") 
" pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.228851 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-config-data\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.230050 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-config-data\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.230666 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f06c7918-a7b3-4041-bd16-63a73e47bf13-logs\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.231052 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-scripts\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.236046 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f06c7918-a7b3-4041-bd16-63a73e47bf13-horizon-secret-key\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.249724 5012 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-rdkkv\" (UniqueName: \"kubernetes.io/projected/f06c7918-a7b3-4041-bd16-63a73e47bf13-kube-api-access-rdkkv\") pod \"horizon-855998b9f9-lkm6w\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") " pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.355934 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-855998b9f9-lkm6w" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.689862 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"13bff5bd-2005-4cce-986a-5bcd2d5a396c","Type":"ContainerStarted","Data":"10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc"} Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.695615 5012 generic.go:334] "Generic (PLEG): container finished" podID="848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" containerID="46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe" exitCode=0 Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.695715 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" event={"ID":"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7","Type":"ContainerDied","Data":"46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe"} Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.695773 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" event={"ID":"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7","Type":"ContainerStarted","Data":"cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23"} Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.695791 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.715164 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22696f62-66c5-4302-b9dc-24a981de161e" 
path="/var/lib/kubelet/pods/22696f62-66c5-4302-b9dc-24a981de161e/volumes" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.715932 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d569c4f-6582-4673-a847-2243e668635d" path="/var/lib/kubelet/pods/3d569c4f-6582-4673-a847-2243e668635d/volumes" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.720630 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/notifications-rabbitmq-server-0" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.723775 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" podStartSLOduration=3.7237547920000003 podStartE2EDuration="3.723754792s" podCreationTimestamp="2026-02-19 05:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:42:50.723442094 +0000 UTC m=+1066.756764683" watchObservedRunningTime="2026-02-19 05:42:50.723754792 +0000 UTC m=+1066.757077361" Feb 19 05:42:50 crc kubenswrapper[5012]: I0219 05:42:50.936801 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-855998b9f9-lkm6w"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.289590 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-xj7dw"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.290834 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.308550 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-gfhmj"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.309683 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-gfhmj" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.320290 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.320475 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.320582 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-c2ldt" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.340467 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c45b5647f-k799c"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.367756 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xj7dw"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.394243 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gfhmj"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.451360 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-cdj57"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.452604 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.460355 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-6chdl" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.460541 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.461499 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-combined-ca-bundle\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.461565 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-db-sync-config-data\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.461611 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-config-data\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.461631 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwnxp\" (UniqueName: \"kubernetes.io/projected/8c63064a-a5f1-48da-b11c-eb76b04e3397-kube-api-access-fwnxp\") pod \"neutron-db-create-gfhmj\" (UID: \"8c63064a-a5f1-48da-b11c-eb76b04e3397\") " pod="openstack/neutron-db-create-gfhmj" Feb 
19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.461667 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c63064a-a5f1-48da-b11c-eb76b04e3397-operator-scripts\") pod \"neutron-db-create-gfhmj\" (UID: \"8c63064a-a5f1-48da-b11c-eb76b04e3397\") " pod="openstack/neutron-db-create-gfhmj" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.461694 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sghmp\" (UniqueName: \"kubernetes.io/projected/b98c972c-b350-44a1-a7c5-028914fe7bfc-kube-api-access-sghmp\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.461737 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b98c972c-b350-44a1-a7c5-028914fe7bfc-etc-machine-id\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.461755 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-scripts\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.466012 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-cdj57"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.529566 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75cc7d9585-x8r8l"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.532089 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.570709 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwnxp\" (UniqueName: \"kubernetes.io/projected/8c63064a-a5f1-48da-b11c-eb76b04e3397-kube-api-access-fwnxp\") pod \"neutron-db-create-gfhmj\" (UID: \"8c63064a-a5f1-48da-b11c-eb76b04e3397\") " pod="openstack/neutron-db-create-gfhmj" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.570788 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c63064a-a5f1-48da-b11c-eb76b04e3397-operator-scripts\") pod \"neutron-db-create-gfhmj\" (UID: \"8c63064a-a5f1-48da-b11c-eb76b04e3397\") " pod="openstack/neutron-db-create-gfhmj" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.570829 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-config-data\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.570859 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sghmp\" (UniqueName: \"kubernetes.io/projected/b98c972c-b350-44a1-a7c5-028914fe7bfc-kube-api-access-sghmp\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.570906 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b98c972c-b350-44a1-a7c5-028914fe7bfc-etc-machine-id\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 
05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.570931 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq87n\" (UniqueName: \"kubernetes.io/projected/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-kube-api-access-fq87n\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.570960 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-scripts\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.570993 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-combined-ca-bundle\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.571025 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-db-sync-config-data\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.571068 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-db-sync-config-data\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.571118 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-combined-ca-bundle\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.571143 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-config-data\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.581087 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c63064a-a5f1-48da-b11c-eb76b04e3397-operator-scripts\") pod \"neutron-db-create-gfhmj\" (UID: \"8c63064a-a5f1-48da-b11c-eb76b04e3397\") " pod="openstack/neutron-db-create-gfhmj" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.581913 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c723-account-create-update-n6sg9"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.582107 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b98c972c-b350-44a1-a7c5-028914fe7bfc-etc-machine-id\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.598494 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.606000 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.630604 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-db-sync-config-data\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.630887 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-combined-ca-bundle\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.644733 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75cc7d9585-x8r8l"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.644774 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-scripts\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.645132 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sghmp\" (UniqueName: \"kubernetes.io/projected/b98c972c-b350-44a1-a7c5-028914fe7bfc-kube-api-access-sghmp\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.663383 5012 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-fwnxp\" (UniqueName: \"kubernetes.io/projected/8c63064a-a5f1-48da-b11c-eb76b04e3397-kube-api-access-fwnxp\") pod \"neutron-db-create-gfhmj\" (UID: \"8c63064a-a5f1-48da-b11c-eb76b04e3397\") " pod="openstack/neutron-db-create-gfhmj" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677038 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-scripts\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677100 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd86f802-eef3-479a-870a-e34e7ce028ba-operator-scripts\") pod \"neutron-c723-account-create-update-n6sg9\" (UID: \"cd86f802-eef3-479a-870a-e34e7ce028ba\") " pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677151 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-config-data\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677190 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-combined-ca-bundle\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677270 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c163961-185c-418b-a0f5-a4d55b59f3ec-logs\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677351 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-config-data\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677383 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c163961-185c-418b-a0f5-a4d55b59f3ec-horizon-secret-key\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677469 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq87n\" (UniqueName: \"kubernetes.io/projected/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-kube-api-access-fq87n\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677522 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9sfn\" (UniqueName: \"kubernetes.io/projected/7c163961-185c-418b-a0f5-a4d55b59f3ec-kube-api-access-d9sfn\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677576 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-db-sync-config-data\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.677601 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rhkv\" (UniqueName: \"kubernetes.io/projected/cd86f802-eef3-479a-870a-e34e7ce028ba-kube-api-access-8rhkv\") pod \"neutron-c723-account-create-update-n6sg9\" (UID: \"cd86f802-eef3-479a-870a-e34e7ce028ba\") " pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.684187 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-config-data\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.692215 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-db-sync-config-data\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.698017 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c723-account-create-update-n6sg9"] Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.698888 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-config-data\") pod \"cinder-db-sync-xj7dw\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.705268 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq87n\" (UniqueName: \"kubernetes.io/projected/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-kube-api-access-fq87n\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.708962 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-combined-ca-bundle\") pod \"watcher-db-sync-cdj57\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.791619 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c163961-185c-418b-a0f5-a4d55b59f3ec-logs\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.791682 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c163961-185c-418b-a0f5-a4d55b59f3ec-horizon-secret-key\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.791755 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9sfn\" (UniqueName: \"kubernetes.io/projected/7c163961-185c-418b-a0f5-a4d55b59f3ec-kube-api-access-d9sfn\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.791786 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rhkv\" 
(UniqueName: \"kubernetes.io/projected/cd86f802-eef3-479a-870a-e34e7ce028ba-kube-api-access-8rhkv\") pod \"neutron-c723-account-create-update-n6sg9\" (UID: \"cd86f802-eef3-479a-870a-e34e7ce028ba\") " pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.791826 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-scripts\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.791846 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd86f802-eef3-479a-870a-e34e7ce028ba-operator-scripts\") pod \"neutron-c723-account-create-update-n6sg9\" (UID: \"cd86f802-eef3-479a-870a-e34e7ce028ba\") " pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.791869 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-config-data\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.793032 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-config-data\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.793241 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7c163961-185c-418b-a0f5-a4d55b59f3ec-logs\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.794824 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-scripts\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.795495 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd86f802-eef3-479a-870a-e34e7ce028ba-operator-scripts\") pod \"neutron-c723-account-create-update-n6sg9\" (UID: \"cd86f802-eef3-479a-870a-e34e7ce028ba\") " pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.809941 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-cdj57" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.810600 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerName="glance-log" containerID="cri-o://10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc" gracePeriod=30 Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.814745 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerName="glance-httpd" containerID="cri-o://eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1" gracePeriod=30 Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.818683 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c163961-185c-418b-a0f5-a4d55b59f3ec-horizon-secret-key\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.824095 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9sfn\" (UniqueName: \"kubernetes.io/projected/7c163961-185c-418b-a0f5-a4d55b59f3ec-kube-api-access-d9sfn\") pod \"horizon-75cc7d9585-x8r8l\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.845833 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rhkv\" (UniqueName: \"kubernetes.io/projected/cd86f802-eef3-479a-870a-e34e7ce028ba-kube-api-access-8rhkv\") pod \"neutron-c723-account-create-update-n6sg9\" (UID: \"cd86f802-eef3-479a-870a-e34e7ce028ba\") " pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:42:51 crc 
kubenswrapper[5012]: I0219 05:42:51.857149 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b53da41-1ee4-4a06-b1ad-2f689fafd2be","Type":"ContainerStarted","Data":"81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e"} Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.860864 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.860854076 podStartE2EDuration="4.860854076s" podCreationTimestamp="2026-02-19 05:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:42:51.856870356 +0000 UTC m=+1067.890192925" watchObservedRunningTime="2026-02-19 05:42:51.860854076 +0000 UTC m=+1067.894176645" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.865939 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-855998b9f9-lkm6w" event={"ID":"f06c7918-a7b3-4041-bd16-63a73e47bf13","Type":"ContainerStarted","Data":"fe94aacdf3b8c844dc9abab1e415854e4e47ea212d07513a67fc4c1411f63f3f"} Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.880974 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.941445 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:42:51 crc kubenswrapper[5012]: I0219 05:42:51.952379 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gfhmj" Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.029585 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:42:52 crc kubenswrapper[5012]: E0219 05:42:52.112077 5012 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13bff5bd_2005_4cce_986a_5bcd2d5a396c.slice/crio-10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13bff5bd_2005_4cce_986a_5bcd2d5a396c.slice/crio-conmon-eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1.scope\": RecentStats: unable to find data in memory cache]" Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.699438 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75cc7d9585-x8r8l"] Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.863063 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.881497 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-cdj57"] Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.887464 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75cc7d9585-x8r8l" event={"ID":"7c163961-185c-418b-a0f5-a4d55b59f3ec","Type":"ContainerStarted","Data":"7cfa7cf48e4edcddab8aec2d0bfb0aeea8557ac2316f0d4b2e00c1aa2310cba1"} Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.955165 5012 generic.go:334] "Generic (PLEG): container finished" podID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerID="eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1" exitCode=143 Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.955205 5012 generic.go:334] "Generic (PLEG): container finished" podID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerID="10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc" exitCode=143 Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.955348 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.955397 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"13bff5bd-2005-4cce-986a-5bcd2d5a396c","Type":"ContainerDied","Data":"eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1"} Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.955478 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"13bff5bd-2005-4cce-986a-5bcd2d5a396c","Type":"ContainerDied","Data":"10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc"} Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.955505 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"13bff5bd-2005-4cce-986a-5bcd2d5a396c","Type":"ContainerDied","Data":"15902dc00744af1a937cdb4358bfbecf5055d748ea34206757e15f3417243c32"} Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.955530 5012 scope.go:117] "RemoveContainer" containerID="eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1" Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.973968 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b53da41-1ee4-4a06-b1ad-2f689fafd2be","Type":"ContainerStarted","Data":"7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d"} Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.974444 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerName="glance-log" containerID="cri-o://81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e" gracePeriod=30 Feb 19 05:42:52 crc kubenswrapper[5012]: I0219 05:42:52.974864 5012 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/glance-default-internal-api-0" podUID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerName="glance-httpd" containerID="cri-o://7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d" gracePeriod=30 Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.022341 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-httpd-run\") pod \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.022431 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-combined-ca-bundle\") pod \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.022457 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.022507 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-scripts\") pod \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.022566 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-logs\") pod \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.022620 5012 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-config-data\") pod \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.022726 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-public-tls-certs\") pod \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.022783 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhfdv\" (UniqueName: \"kubernetes.io/projected/13bff5bd-2005-4cce-986a-5bcd2d5a396c-kube-api-access-zhfdv\") pod \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\" (UID: \"13bff5bd-2005-4cce-986a-5bcd2d5a396c\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.030391 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "13bff5bd-2005-4cce-986a-5bcd2d5a396c" (UID: "13bff5bd-2005-4cce-986a-5bcd2d5a396c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.030484 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xj7dw"] Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.036642 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-logs" (OuterVolumeSpecName: "logs") pod "13bff5bd-2005-4cce-986a-5bcd2d5a396c" (UID: "13bff5bd-2005-4cce-986a-5bcd2d5a396c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.039610 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "13bff5bd-2005-4cce-986a-5bcd2d5a396c" (UID: "13bff5bd-2005-4cce-986a-5bcd2d5a396c"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.048406 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-scripts" (OuterVolumeSpecName: "scripts") pod "13bff5bd-2005-4cce-986a-5bcd2d5a396c" (UID: "13bff5bd-2005-4cce-986a-5bcd2d5a396c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.049127 5012 scope.go:117] "RemoveContainer" containerID="10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.055987 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.055951709 podStartE2EDuration="6.055951709s" podCreationTimestamp="2026-02-19 05:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:42:53.007178213 +0000 UTC m=+1069.040500782" watchObservedRunningTime="2026-02-19 05:42:53.055951709 +0000 UTC m=+1069.089274278" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.064553 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13bff5bd-2005-4cce-986a-5bcd2d5a396c-kube-api-access-zhfdv" (OuterVolumeSpecName: "kube-api-access-zhfdv") pod "13bff5bd-2005-4cce-986a-5bcd2d5a396c" (UID: "13bff5bd-2005-4cce-986a-5bcd2d5a396c"). 
InnerVolumeSpecName "kube-api-access-zhfdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.074881 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13bff5bd-2005-4cce-986a-5bcd2d5a396c" (UID: "13bff5bd-2005-4cce-986a-5bcd2d5a396c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.088453 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c723-account-create-update-n6sg9"] Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.099608 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gfhmj"] Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.110199 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-config-data" (OuterVolumeSpecName: "config-data") pod "13bff5bd-2005-4cce-986a-5bcd2d5a396c" (UID: "13bff5bd-2005-4cce-986a-5bcd2d5a396c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.120178 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "13bff5bd-2005-4cce-986a-5bcd2d5a396c" (UID: "13bff5bd-2005-4cce-986a-5bcd2d5a396c"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.134813 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhfdv\" (UniqueName: \"kubernetes.io/projected/13bff5bd-2005-4cce-986a-5bcd2d5a396c-kube-api-access-zhfdv\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.134855 5012 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.134866 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.134904 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.134914 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.134922 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13bff5bd-2005-4cce-986a-5bcd2d5a396c-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.134930 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.134938 5012 reconciler_common.go:293] "Volume detached for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/13bff5bd-2005-4cce-986a-5bcd2d5a396c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.168640 5012 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.237441 5012 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.360705 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.394711 5012 scope.go:117] "RemoveContainer" containerID="eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.403948 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:53 crc kubenswrapper[5012]: E0219 05:42:53.414179 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1\": container with ID starting with eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1 not found: ID does not exist" containerID="eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.414258 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1"} err="failed to get container status \"eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1\": rpc error: code = NotFound desc = could not find container 
\"eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1\": container with ID starting with eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1 not found: ID does not exist" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.414296 5012 scope.go:117] "RemoveContainer" containerID="10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc" Feb 19 05:42:53 crc kubenswrapper[5012]: E0219 05:42:53.414768 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc\": container with ID starting with 10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc not found: ID does not exist" containerID="10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.414796 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc"} err="failed to get container status \"10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc\": rpc error: code = NotFound desc = could not find container \"10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc\": container with ID starting with 10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc not found: ID does not exist" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.414810 5012 scope.go:117] "RemoveContainer" containerID="eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.415102 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1"} err="failed to get container status \"eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1\": rpc error: code = NotFound desc = could not find 
container \"eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1\": container with ID starting with eda9f4dd835a8568df86dc46f98cc45f357abf02a495c28823bc89494fd708c1 not found: ID does not exist" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.415130 5012 scope.go:117] "RemoveContainer" containerID="10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.415419 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc"} err="failed to get container status \"10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc\": rpc error: code = NotFound desc = could not find container \"10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc\": container with ID starting with 10e77a8b257e4ef7a95fdb43a9aa22642c72d6058e9a4ba32c58d086f971f8cc not found: ID does not exist" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.421037 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:53 crc kubenswrapper[5012]: E0219 05:42:53.421824 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerName="glance-log" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.421839 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerName="glance-log" Feb 19 05:42:53 crc kubenswrapper[5012]: E0219 05:42:53.421880 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerName="glance-httpd" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.421887 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerName="glance-httpd" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.422062 5012 
memory_manager.go:354] "RemoveStaleState removing state" podUID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerName="glance-log" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.422077 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" containerName="glance-httpd" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.424070 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.429401 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.432012 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.432287 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.559954 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.560027 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-config-data\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.560055 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-scripts\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.560166 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.560359 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.560436 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhpwq\" (UniqueName: \"kubernetes.io/projected/88a90a35-c893-4857-9f8b-9a405c96c044-kube-api-access-fhpwq\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.560481 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-logs\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.560589 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.662695 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.662746 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-config-data\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.662762 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-scripts\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.662796 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.662844 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.662873 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhpwq\" (UniqueName: \"kubernetes.io/projected/88a90a35-c893-4857-9f8b-9a405c96c044-kube-api-access-fhpwq\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.662897 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-logs\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.662929 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.664084 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.664347 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.664703 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-logs\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.672206 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.681111 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.681600 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-config-data\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.686179 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-scripts\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " 
pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.691858 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhpwq\" (UniqueName: \"kubernetes.io/projected/88a90a35-c893-4857-9f8b-9a405c96c044-kube-api-access-fhpwq\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.721516 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.759158 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.833155 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.968950 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.969384 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwgzm\" (UniqueName: \"kubernetes.io/projected/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-kube-api-access-hwgzm\") pod \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.969492 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-logs\") pod \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.969534 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-internal-tls-certs\") pod \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.969567 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-combined-ca-bundle\") pod \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.969619 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-config-data\") pod \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.969640 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-httpd-run\") pod \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.969683 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-scripts\") pod \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\" (UID: \"0b53da41-1ee4-4a06-b1ad-2f689fafd2be\") " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.970653 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-logs" (OuterVolumeSpecName: "logs") pod "0b53da41-1ee4-4a06-b1ad-2f689fafd2be" (UID: "0b53da41-1ee4-4a06-b1ad-2f689fafd2be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.973536 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "0b53da41-1ee4-4a06-b1ad-2f689fafd2be" (UID: "0b53da41-1ee4-4a06-b1ad-2f689fafd2be"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.975710 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-scripts" (OuterVolumeSpecName: "scripts") pod "0b53da41-1ee4-4a06-b1ad-2f689fafd2be" (UID: "0b53da41-1ee4-4a06-b1ad-2f689fafd2be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.983416 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-kube-api-access-hwgzm" (OuterVolumeSpecName: "kube-api-access-hwgzm") pod "0b53da41-1ee4-4a06-b1ad-2f689fafd2be" (UID: "0b53da41-1ee4-4a06-b1ad-2f689fafd2be"). InnerVolumeSpecName "kube-api-access-hwgzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.985815 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0b53da41-1ee4-4a06-b1ad-2f689fafd2be" (UID: "0b53da41-1ee4-4a06-b1ad-2f689fafd2be"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.998157 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.998191 5012 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.998200 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.998228 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 19 05:42:53 crc kubenswrapper[5012]: I0219 05:42:53.998238 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwgzm\" (UniqueName: \"kubernetes.io/projected/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-kube-api-access-hwgzm\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.019934 5012 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.026837 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xj7dw" event={"ID":"b98c972c-b350-44a1-a7c5-028914fe7bfc","Type":"ContainerStarted","Data":"a84681fa37d45c4925f780e8954023bd4c066ed1cbb2bb7d3fe3e2f3209e4c8b"} Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.036810 5012 generic.go:334] "Generic (PLEG): container 
finished" podID="8c63064a-a5f1-48da-b11c-eb76b04e3397" containerID="cfe7e53a61fb5256f22c4a39c4ac5b0bf7cc2f1ccf28f2709694c6b3715b8d0c" exitCode=0 Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.037113 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gfhmj" event={"ID":"8c63064a-a5f1-48da-b11c-eb76b04e3397","Type":"ContainerDied","Data":"cfe7e53a61fb5256f22c4a39c4ac5b0bf7cc2f1ccf28f2709694c6b3715b8d0c"} Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.037171 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gfhmj" event={"ID":"8c63064a-a5f1-48da-b11c-eb76b04e3397","Type":"ContainerStarted","Data":"92bd8d00206aa24501f4ff93ff9ed472e42ea7a5e3069017ed2ab2dec8bfa7db"} Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.039622 5012 generic.go:334] "Generic (PLEG): container finished" podID="cd86f802-eef3-479a-870a-e34e7ce028ba" containerID="a20a059012a07fc06fff87153b7822f281e937cfbfdfbad5c4e4671c1d2bfb30" exitCode=0 Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.040084 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c723-account-create-update-n6sg9" event={"ID":"cd86f802-eef3-479a-870a-e34e7ce028ba","Type":"ContainerDied","Data":"a20a059012a07fc06fff87153b7822f281e937cfbfdfbad5c4e4671c1d2bfb30"} Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.040123 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c723-account-create-update-n6sg9" event={"ID":"cd86f802-eef3-479a-870a-e34e7ce028ba","Type":"ContainerStarted","Data":"ffe1af5d39043f27e616cee7a2931ec77e266d5fd98f4e7902bda884efb795d5"} Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.041711 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-cdj57" event={"ID":"89f14c4e-147e-4a05-a8d9-63b93aaad4a4","Type":"ContainerStarted","Data":"416ab3f77df9ec5ea4c8eb669473dd003dd38b711f8cc41ad525b40979b07e19"} Feb 19 05:42:54 crc 
kubenswrapper[5012]: I0219 05:42:54.046690 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b53da41-1ee4-4a06-b1ad-2f689fafd2be" (UID: "0b53da41-1ee4-4a06-b1ad-2f689fafd2be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.056178 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0b53da41-1ee4-4a06-b1ad-2f689fafd2be" (UID: "0b53da41-1ee4-4a06-b1ad-2f689fafd2be"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.073571 5012 generic.go:334] "Generic (PLEG): container finished" podID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerID="7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d" exitCode=0 Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.073984 5012 generic.go:334] "Generic (PLEG): container finished" podID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerID="81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e" exitCode=143 Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.073661 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b53da41-1ee4-4a06-b1ad-2f689fafd2be","Type":"ContainerDied","Data":"7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d"} Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.074058 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"0b53da41-1ee4-4a06-b1ad-2f689fafd2be","Type":"ContainerDied","Data":"81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e"} Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.074088 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0b53da41-1ee4-4a06-b1ad-2f689fafd2be","Type":"ContainerDied","Data":"d5e045aceaaad28fe4dee87429ebb210f9d9b506f56cee1ed148bccaa4202c45"} Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.073702 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.074107 5012 scope.go:117] "RemoveContainer" containerID="7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.100163 5012 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.100211 5012 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.100226 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.100986 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-config-data" (OuterVolumeSpecName: "config-data") pod "0b53da41-1ee4-4a06-b1ad-2f689fafd2be" (UID: "0b53da41-1ee4-4a06-b1ad-2f689fafd2be"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.196660 5012 scope.go:117] "RemoveContainer" containerID="81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.203612 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b53da41-1ee4-4a06-b1ad-2f689fafd2be-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.283989 5012 scope.go:117] "RemoveContainer" containerID="7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d" Feb 19 05:42:54 crc kubenswrapper[5012]: E0219 05:42:54.285199 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d\": container with ID starting with 7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d not found: ID does not exist" containerID="7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.285241 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d"} err="failed to get container status \"7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d\": rpc error: code = NotFound desc = could not find container \"7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d\": container with ID starting with 7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d not found: ID does not exist" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.285266 5012 scope.go:117] "RemoveContainer" containerID="81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e" Feb 19 05:42:54 crc kubenswrapper[5012]: E0219 05:42:54.286784 5012 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e\": container with ID starting with 81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e not found: ID does not exist" containerID="81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.286811 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e"} err="failed to get container status \"81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e\": rpc error: code = NotFound desc = could not find container \"81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e\": container with ID starting with 81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e not found: ID does not exist" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.286856 5012 scope.go:117] "RemoveContainer" containerID="7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.287850 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d"} err="failed to get container status \"7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d\": rpc error: code = NotFound desc = could not find container \"7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d\": container with ID starting with 7319f97be428e5262b3d538f21510db7227fa153decc4b9ad8d1cd2f8e11ff5d not found: ID does not exist" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.287890 5012 scope.go:117] "RemoveContainer" containerID="81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.288913 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e"} err="failed to get container status \"81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e\": rpc error: code = NotFound desc = could not find container \"81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e\": container with ID starting with 81160b9578642b86fb44a442aca116284b9c7774c4a8636935025ebca410215e not found: ID does not exist" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.442242 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.464361 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.478942 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:42:54 crc kubenswrapper[5012]: E0219 05:42:54.487753 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerName="glance-httpd" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.487789 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerName="glance-httpd" Feb 19 05:42:54 crc kubenswrapper[5012]: E0219 05:42:54.487839 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerName="glance-log" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.487847 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerName="glance-log" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.488110 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerName="glance-log" Feb 19 
05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.488142 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" containerName="glance-httpd" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.489159 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.493431 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.496726 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.508356 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:54 crc kubenswrapper[5012]: W0219 05:42:54.560769 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88a90a35_c893_4857_9f8b_9a405c96c044.slice/crio-ecf3b5a274f4488d8ab70a5b1867720561b8843e1c2bc81491a836b7a8a78bb9 WatchSource:0}: Error finding container ecf3b5a274f4488d8ab70a5b1867720561b8843e1c2bc81491a836b7a8a78bb9: Status 404 returned error can't find the container with id ecf3b5a274f4488d8ab70a5b1867720561b8843e1c2bc81491a836b7a8a78bb9 Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.568366 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.621196 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.621250 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.621288 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.621330 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.621398 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vglb8\" (UniqueName: \"kubernetes.io/projected/eb53e400-f5d7-4c86-9aab-eda61301a4cf-kube-api-access-vglb8\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.621415 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" 
(UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.621441 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.621466 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.728383 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.728473 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.728547 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 
05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.728570 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.728609 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.728628 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.728698 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vglb8\" (UniqueName: \"kubernetes.io/projected/eb53e400-f5d7-4c86-9aab-eda61301a4cf-kube-api-access-vglb8\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.728720 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 
05:42:54.728907 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.735780 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.746957 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.753430 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.758431 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.759123 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.764338 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.782234 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b53da41-1ee4-4a06-b1ad-2f689fafd2be" path="/var/lib/kubelet/pods/0b53da41-1ee4-4a06-b1ad-2f689fafd2be/volumes" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.783052 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13bff5bd-2005-4cce-986a-5bcd2d5a396c" path="/var/lib/kubelet/pods/13bff5bd-2005-4cce-986a-5bcd2d5a396c/volumes" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.802556 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:54 crc kubenswrapper[5012]: I0219 05:42:54.805086 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vglb8\" (UniqueName: \"kubernetes.io/projected/eb53e400-f5d7-4c86-9aab-eda61301a4cf-kube-api-access-vglb8\") pod \"glance-default-internal-api-0\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:42:55 crc kubenswrapper[5012]: I0219 05:42:55.097426 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"88a90a35-c893-4857-9f8b-9a405c96c044","Type":"ContainerStarted","Data":"ecf3b5a274f4488d8ab70a5b1867720561b8843e1c2bc81491a836b7a8a78bb9"} Feb 19 05:42:55 crc kubenswrapper[5012]: I0219 05:42:55.113329 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:42:55 crc kubenswrapper[5012]: I0219 05:42:55.891131 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-855998b9f9-lkm6w"] Feb 19 05:42:55 crc kubenswrapper[5012]: I0219 05:42:55.911744 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:42:55 crc kubenswrapper[5012]: I0219 05:42:55.958522 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6cdcb467fb-8tvnz"] Feb 19 05:42:55 crc kubenswrapper[5012]: I0219 05:42:55.969354 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:55 crc kubenswrapper[5012]: I0219 05:42:55.972028 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 19 05:42:55 crc kubenswrapper[5012]: I0219 05:42:55.975255 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cdcb467fb-8tvnz"] Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.021537 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.074572 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6c937bbe-f068-4e5b-81ad-9455104062da-horizon-secret-key\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.074624 
5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c937bbe-f068-4e5b-81ad-9455104062da-scripts\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.074651 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c937bbe-f068-4e5b-81ad-9455104062da-horizon-tls-certs\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.074692 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c937bbe-f068-4e5b-81ad-9455104062da-logs\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.074714 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqq6d\" (UniqueName: \"kubernetes.io/projected/6c937bbe-f068-4e5b-81ad-9455104062da-kube-api-access-xqq6d\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.074757 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c937bbe-f068-4e5b-81ad-9455104062da-combined-ca-bundle\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.074775 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c937bbe-f068-4e5b-81ad-9455104062da-config-data\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.118975 5012 generic.go:334] "Generic (PLEG): container finished" podID="25558255-c27f-4f6e-a838-675ae8ec77b6" containerID="12a292fc1b8e4523fdc0fb30ca3590a1b6b6f0c70c3e42e076f92a7b213241f2" exitCode=0 Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.119019 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kqsgz" event={"ID":"25558255-c27f-4f6e-a838-675ae8ec77b6","Type":"ContainerDied","Data":"12a292fc1b8e4523fdc0fb30ca3590a1b6b6f0c70c3e42e076f92a7b213241f2"} Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.178771 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c937bbe-f068-4e5b-81ad-9455104062da-logs\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.178843 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqq6d\" (UniqueName: \"kubernetes.io/projected/6c937bbe-f068-4e5b-81ad-9455104062da-kube-api-access-xqq6d\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.178898 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c937bbe-f068-4e5b-81ad-9455104062da-combined-ca-bundle\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " 
pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.178924 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c937bbe-f068-4e5b-81ad-9455104062da-config-data\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.179002 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6c937bbe-f068-4e5b-81ad-9455104062da-horizon-secret-key\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.179030 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c937bbe-f068-4e5b-81ad-9455104062da-scripts\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.179054 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c937bbe-f068-4e5b-81ad-9455104062da-horizon-tls-certs\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.180167 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c937bbe-f068-4e5b-81ad-9455104062da-logs\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.181316 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c937bbe-f068-4e5b-81ad-9455104062da-config-data\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.181537 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c937bbe-f068-4e5b-81ad-9455104062da-scripts\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.195798 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6c937bbe-f068-4e5b-81ad-9455104062da-horizon-secret-key\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.196812 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c937bbe-f068-4e5b-81ad-9455104062da-combined-ca-bundle\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.204323 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c937bbe-f068-4e5b-81ad-9455104062da-horizon-tls-certs\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.210508 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqq6d\" (UniqueName: 
\"kubernetes.io/projected/6c937bbe-f068-4e5b-81ad-9455104062da-kube-api-access-xqq6d\") pod \"horizon-6cdcb467fb-8tvnz\" (UID: \"6c937bbe-f068-4e5b-81ad-9455104062da\") " pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:56 crc kubenswrapper[5012]: I0219 05:42:56.307951 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:42:57 crc kubenswrapper[5012]: I0219 05:42:57.812583 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:42:57 crc kubenswrapper[5012]: I0219 05:42:57.882360 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bf9dcd95-lzm7b"] Feb 19 05:42:57 crc kubenswrapper[5012]: I0219 05:42:57.882585 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerName="dnsmasq-dns" containerID="cri-o://7ff9e9710973d65273f4c7d1b2b07184b8147f2ccbf37eac212553af6a1fa77e" gracePeriod=10 Feb 19 05:42:58 crc kubenswrapper[5012]: I0219 05:42:58.170891 5012 generic.go:334] "Generic (PLEG): container finished" podID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerID="7ff9e9710973d65273f4c7d1b2b07184b8147f2ccbf37eac212553af6a1fa77e" exitCode=0 Feb 19 05:42:58 crc kubenswrapper[5012]: I0219 05:42:58.170939 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" event={"ID":"f22ec0c5-41a9-4f36-adb0-405e5a26d209","Type":"ContainerDied","Data":"7ff9e9710973d65273f4c7d1b2b07184b8147f2ccbf37eac212553af6a1fa77e"} Feb 19 05:43:02 crc kubenswrapper[5012]: I0219 05:43:02.859895 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Feb 19 
05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.500444 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.506575 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.516630 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gfhmj" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.689256 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd86f802-eef3-479a-870a-e34e7ce028ba-operator-scripts\") pod \"cd86f802-eef3-479a-870a-e34e7ce028ba\" (UID: \"cd86f802-eef3-479a-870a-e34e7ce028ba\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.689342 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rhkv\" (UniqueName: \"kubernetes.io/projected/cd86f802-eef3-479a-870a-e34e7ce028ba-kube-api-access-8rhkv\") pod \"cd86f802-eef3-479a-870a-e34e7ce028ba\" (UID: \"cd86f802-eef3-479a-870a-e34e7ce028ba\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.689448 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7hsh\" (UniqueName: \"kubernetes.io/projected/25558255-c27f-4f6e-a838-675ae8ec77b6-kube-api-access-d7hsh\") pod \"25558255-c27f-4f6e-a838-675ae8ec77b6\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.689640 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-config-data\") pod \"25558255-c27f-4f6e-a838-675ae8ec77b6\" (UID: 
\"25558255-c27f-4f6e-a838-675ae8ec77b6\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.689676 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-credential-keys\") pod \"25558255-c27f-4f6e-a838-675ae8ec77b6\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.689778 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwnxp\" (UniqueName: \"kubernetes.io/projected/8c63064a-a5f1-48da-b11c-eb76b04e3397-kube-api-access-fwnxp\") pod \"8c63064a-a5f1-48da-b11c-eb76b04e3397\" (UID: \"8c63064a-a5f1-48da-b11c-eb76b04e3397\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.689860 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-combined-ca-bundle\") pod \"25558255-c27f-4f6e-a838-675ae8ec77b6\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.689888 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-fernet-keys\") pod \"25558255-c27f-4f6e-a838-675ae8ec77b6\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.689948 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c63064a-a5f1-48da-b11c-eb76b04e3397-operator-scripts\") pod \"8c63064a-a5f1-48da-b11c-eb76b04e3397\" (UID: \"8c63064a-a5f1-48da-b11c-eb76b04e3397\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.690013 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-scripts\") pod \"25558255-c27f-4f6e-a838-675ae8ec77b6\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.690681 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd86f802-eef3-479a-870a-e34e7ce028ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cd86f802-eef3-479a-870a-e34e7ce028ba" (UID: "cd86f802-eef3-479a-870a-e34e7ce028ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.696020 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c63064a-a5f1-48da-b11c-eb76b04e3397-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c63064a-a5f1-48da-b11c-eb76b04e3397" (UID: "8c63064a-a5f1-48da-b11c-eb76b04e3397"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.696357 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "25558255-c27f-4f6e-a838-675ae8ec77b6" (UID: "25558255-c27f-4f6e-a838-675ae8ec77b6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.696684 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "25558255-c27f-4f6e-a838-675ae8ec77b6" (UID: "25558255-c27f-4f6e-a838-675ae8ec77b6"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.698165 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25558255-c27f-4f6e-a838-675ae8ec77b6-kube-api-access-d7hsh" (OuterVolumeSpecName: "kube-api-access-d7hsh") pod "25558255-c27f-4f6e-a838-675ae8ec77b6" (UID: "25558255-c27f-4f6e-a838-675ae8ec77b6"). InnerVolumeSpecName "kube-api-access-d7hsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.712867 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-scripts" (OuterVolumeSpecName: "scripts") pod "25558255-c27f-4f6e-a838-675ae8ec77b6" (UID: "25558255-c27f-4f6e-a838-675ae8ec77b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.712930 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c63064a-a5f1-48da-b11c-eb76b04e3397-kube-api-access-fwnxp" (OuterVolumeSpecName: "kube-api-access-fwnxp") pod "8c63064a-a5f1-48da-b11c-eb76b04e3397" (UID: "8c63064a-a5f1-48da-b11c-eb76b04e3397"). InnerVolumeSpecName "kube-api-access-fwnxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:07 crc kubenswrapper[5012]: E0219 05:43:07.716609 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-config-data podName:25558255-c27f-4f6e-a838-675ae8ec77b6 nodeName:}" failed. No retries permitted until 2026-02-19 05:43:08.216577245 +0000 UTC m=+1084.249899814 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-config-data") pod "25558255-c27f-4f6e-a838-675ae8ec77b6" (UID: "25558255-c27f-4f6e-a838-675ae8ec77b6") : error deleting /var/lib/kubelet/pods/25558255-c27f-4f6e-a838-675ae8ec77b6/volume-subpaths: remove /var/lib/kubelet/pods/25558255-c27f-4f6e-a838-675ae8ec77b6/volume-subpaths: no such file or directory Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.721232 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25558255-c27f-4f6e-a838-675ae8ec77b6" (UID: "25558255-c27f-4f6e-a838-675ae8ec77b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.726597 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd86f802-eef3-479a-870a-e34e7ce028ba-kube-api-access-8rhkv" (OuterVolumeSpecName: "kube-api-access-8rhkv") pod "cd86f802-eef3-479a-870a-e34e7ce028ba" (UID: "cd86f802-eef3-479a-870a-e34e7ce028ba"). InnerVolumeSpecName "kube-api-access-8rhkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.792764 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwnxp\" (UniqueName: \"kubernetes.io/projected/8c63064a-a5f1-48da-b11c-eb76b04e3397-kube-api-access-fwnxp\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.792804 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.792814 5012 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.792824 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c63064a-a5f1-48da-b11c-eb76b04e3397-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.792835 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.792844 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd86f802-eef3-479a-870a-e34e7ce028ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.792852 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rhkv\" (UniqueName: \"kubernetes.io/projected/cd86f802-eef3-479a-870a-e34e7ce028ba-kube-api-access-8rhkv\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 
05:43:07.792860 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7hsh\" (UniqueName: \"kubernetes.io/projected/25558255-c27f-4f6e-a838-675ae8ec77b6-kube-api-access-d7hsh\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:07 crc kubenswrapper[5012]: I0219 05:43:07.792868 5012 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.286999 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c723-account-create-update-n6sg9" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.286996 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c723-account-create-update-n6sg9" event={"ID":"cd86f802-eef3-479a-870a-e34e7ce028ba","Type":"ContainerDied","Data":"ffe1af5d39043f27e616cee7a2931ec77e266d5fd98f4e7902bda884efb795d5"} Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.287571 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffe1af5d39043f27e616cee7a2931ec77e266d5fd98f4e7902bda884efb795d5" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.289728 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kqsgz" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.289746 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kqsgz" event={"ID":"25558255-c27f-4f6e-a838-675ae8ec77b6","Type":"ContainerDied","Data":"1512f5c39e8f19ace9b3040d9e0b560368c4be80736d22a497fc8bc26c80da61"} Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.289786 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1512f5c39e8f19ace9b3040d9e0b560368c4be80736d22a497fc8bc26c80da61" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.292259 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"88a90a35-c893-4857-9f8b-9a405c96c044","Type":"ContainerStarted","Data":"ac6da210c5bb5413e246a1f52c04e83d2fc86480ea465052b466d052aefc08b7"} Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.294480 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gfhmj" event={"ID":"8c63064a-a5f1-48da-b11c-eb76b04e3397","Type":"ContainerDied","Data":"92bd8d00206aa24501f4ff93ff9ed472e42ea7a5e3069017ed2ab2dec8bfa7db"} Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.294538 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92bd8d00206aa24501f4ff93ff9ed472e42ea7a5e3069017ed2ab2dec8bfa7db" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.294619 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-gfhmj" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.303820 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-config-data\") pod \"25558255-c27f-4f6e-a838-675ae8ec77b6\" (UID: \"25558255-c27f-4f6e-a838-675ae8ec77b6\") " Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.309547 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-config-data" (OuterVolumeSpecName: "config-data") pod "25558255-c27f-4f6e-a838-675ae8ec77b6" (UID: "25558255-c27f-4f6e-a838-675ae8ec77b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.405974 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25558255-c27f-4f6e-a838-675ae8ec77b6-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.640363 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kqsgz"] Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.674109 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kqsgz"] Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.724512 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25558255-c27f-4f6e-a838-675ae8ec77b6" path="/var/lib/kubelet/pods/25558255-c27f-4f6e-a838-675ae8ec77b6/volumes" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.729725 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zf89d"] Feb 19 05:43:08 crc kubenswrapper[5012]: E0219 05:43:08.730061 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25558255-c27f-4f6e-a838-675ae8ec77b6" 
containerName="keystone-bootstrap" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.730077 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="25558255-c27f-4f6e-a838-675ae8ec77b6" containerName="keystone-bootstrap" Feb 19 05:43:08 crc kubenswrapper[5012]: E0219 05:43:08.730112 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd86f802-eef3-479a-870a-e34e7ce028ba" containerName="mariadb-account-create-update" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.730119 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd86f802-eef3-479a-870a-e34e7ce028ba" containerName="mariadb-account-create-update" Feb 19 05:43:08 crc kubenswrapper[5012]: E0219 05:43:08.730133 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c63064a-a5f1-48da-b11c-eb76b04e3397" containerName="mariadb-database-create" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.730140 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c63064a-a5f1-48da-b11c-eb76b04e3397" containerName="mariadb-database-create" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.730366 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c63064a-a5f1-48da-b11c-eb76b04e3397" containerName="mariadb-database-create" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.730392 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="25558255-c27f-4f6e-a838-675ae8ec77b6" containerName="keystone-bootstrap" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.730403 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd86f802-eef3-479a-870a-e34e7ce028ba" containerName="mariadb-account-create-update" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.730990 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.737709 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.737874 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.738003 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.738155 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dhq72" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.742613 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.796660 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zf89d"] Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.927595 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-config-data\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.928337 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-fernet-keys\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.928607 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-credential-keys\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.929314 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nknw\" (UniqueName: \"kubernetes.io/projected/555a6373-5cdf-490e-b6ea-b0fb55425d28-kube-api-access-2nknw\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.929411 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-combined-ca-bundle\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:08 crc kubenswrapper[5012]: I0219 05:43:08.929473 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-scripts\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.032411 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-combined-ca-bundle\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.032482 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-scripts\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.032559 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-config-data\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.032603 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-fernet-keys\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.032666 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-credential-keys\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.032775 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nknw\" (UniqueName: \"kubernetes.io/projected/555a6373-5cdf-490e-b6ea-b0fb55425d28-kube-api-access-2nknw\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.038226 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-fernet-keys\") pod \"keystone-bootstrap-zf89d\" (UID: 
\"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.038835 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-combined-ca-bundle\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.039739 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-config-data\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.041828 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-scripts\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.042082 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-credential-keys\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 05:43:09.049252 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nknw\" (UniqueName: \"kubernetes.io/projected/555a6373-5cdf-490e-b6ea-b0fb55425d28-kube-api-access-2nknw\") pod \"keystone-bootstrap-zf89d\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:09 crc kubenswrapper[5012]: I0219 
05:43:09.078643 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zf89d"
Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.680552 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-px7xk"]
Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.682124 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-px7xk"
Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.692416 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-combined-ca-bundle\") pod \"neutron-db-sync-px7xk\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " pod="openstack/neutron-db-sync-px7xk"
Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.692551 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s87kc\" (UniqueName: \"kubernetes.io/projected/787f8a71-dee4-40d2-b33b-85bcfc58f921-kube-api-access-s87kc\") pod \"neutron-db-sync-px7xk\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " pod="openstack/neutron-db-sync-px7xk"
Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.692582 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-config\") pod \"neutron-db-sync-px7xk\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " pod="openstack/neutron-db-sync-px7xk"
Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.693036 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-px7xk"]
Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.693613 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rtrj8"
Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.693695 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.693963 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.795033 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-combined-ca-bundle\") pod \"neutron-db-sync-px7xk\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " pod="openstack/neutron-db-sync-px7xk"
Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.795397 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s87kc\" (UniqueName: \"kubernetes.io/projected/787f8a71-dee4-40d2-b33b-85bcfc58f921-kube-api-access-s87kc\") pod \"neutron-db-sync-px7xk\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " pod="openstack/neutron-db-sync-px7xk"
Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.795458 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-config\") pod \"neutron-db-sync-px7xk\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " pod="openstack/neutron-db-sync-px7xk"
Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.806814 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-config\") pod \"neutron-db-sync-px7xk\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " pod="openstack/neutron-db-sync-px7xk"
Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.813003 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-combined-ca-bundle\") pod \"neutron-db-sync-px7xk\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " pod="openstack/neutron-db-sync-px7xk"
Feb 19 05:43:11 crc kubenswrapper[5012]: I0219 05:43:11.817552 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s87kc\" (UniqueName: \"kubernetes.io/projected/787f8a71-dee4-40d2-b33b-85bcfc58f921-kube-api-access-s87kc\") pod \"neutron-db-sync-px7xk\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " pod="openstack/neutron-db-sync-px7xk"
Feb 19 05:43:12 crc kubenswrapper[5012]: I0219 05:43:12.002589 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-px7xk"
Feb 19 05:43:12 crc kubenswrapper[5012]: I0219 05:43:12.861016 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout"
Feb 19 05:43:14 crc kubenswrapper[5012]: I0219 05:43:14.431997 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 05:43:14 crc kubenswrapper[5012]: I0219 05:43:14.432660 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 05:43:15 crc kubenswrapper[5012]: E0219 05:43:15.587021 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current"
Feb 19 05:43:15 crc kubenswrapper[5012]: E0219 05:43:15.587685 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current"
Feb 19 05:43:15 crc kubenswrapper[5012]: E0219 05:43:15.587912 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-master-centos10/openstack-horizon:current,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf4hcdh54h585h5b4hfbh667h5dch5d4h85hf9h5dh8dh64hddh676hdfh575h56dh699h5cbhd6hfdh589h5bdh5f6hddh569h549h87h59dh557q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdkkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-855998b9f9-lkm6w_openstack(f06c7918-a7b3-4041-bd16-63a73e47bf13): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 05:43:15 crc kubenswrapper[5012]: E0219 05:43:15.594740 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-horizon:current\\\"\"]" pod="openstack/horizon-855998b9f9-lkm6w" podUID="f06c7918-a7b3-4041-bd16-63a73e47bf13"
Feb 19 05:43:15 crc kubenswrapper[5012]: E0219 05:43:15.607608 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current"
Feb 19 05:43:15 crc kubenswrapper[5012]: E0219 05:43:15.607700 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current"
Feb 19 05:43:15 crc kubenswrapper[5012]: E0219 05:43:15.607926 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-master-centos10/openstack-horizon:current,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67bh5ch5ffh9bh7h67bh57h59ch5c7h59bh5b9h5ffh647h694h5ffhf5h5fh677hbh575h587hcfh589h66bh55dh594h547h77h68fh695h559h566q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9stt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-56f66dc579-dpndj_openstack(cb1825de-9782-4820-96aa-d4909a0f7820): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 05:43:15 crc kubenswrapper[5012]: E0219 05:43:15.612334 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-horizon:current\\\"\"]" pod="openstack/horizon-56f66dc579-dpndj" podUID="cb1825de-9782-4820-96aa-d4909a0f7820"
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.680512 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b"
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.807330 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-config\") pod \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") "
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.807667 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-dns-svc\") pod \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") "
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.807761 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-nb\") pod \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") "
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.807833 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-sb\") pod \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") "
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.807874 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25brm\" (UniqueName: \"kubernetes.io/projected/f22ec0c5-41a9-4f36-adb0-405e5a26d209-kube-api-access-25brm\") pod \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\" (UID: \"f22ec0c5-41a9-4f36-adb0-405e5a26d209\") "
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.815987 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f22ec0c5-41a9-4f36-adb0-405e5a26d209-kube-api-access-25brm" (OuterVolumeSpecName: "kube-api-access-25brm") pod "f22ec0c5-41a9-4f36-adb0-405e5a26d209" (UID: "f22ec0c5-41a9-4f36-adb0-405e5a26d209"). InnerVolumeSpecName "kube-api-access-25brm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.866435 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-config" (OuterVolumeSpecName: "config") pod "f22ec0c5-41a9-4f36-adb0-405e5a26d209" (UID: "f22ec0c5-41a9-4f36-adb0-405e5a26d209"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.867749 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f22ec0c5-41a9-4f36-adb0-405e5a26d209" (UID: "f22ec0c5-41a9-4f36-adb0-405e5a26d209"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.874474 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f22ec0c5-41a9-4f36-adb0-405e5a26d209" (UID: "f22ec0c5-41a9-4f36-adb0-405e5a26d209"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.896402 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f22ec0c5-41a9-4f36-adb0-405e5a26d209" (UID: "f22ec0c5-41a9-4f36-adb0-405e5a26d209"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.912337 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.912402 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.912421 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25brm\" (UniqueName: \"kubernetes.io/projected/f22ec0c5-41a9-4f36-adb0-405e5a26d209-kube-api-access-25brm\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.912467 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-config\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:15 crc kubenswrapper[5012]: I0219 05:43:15.912483 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f22ec0c5-41a9-4f36-adb0-405e5a26d209-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:16 crc kubenswrapper[5012]: I0219 05:43:16.403454 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b"
Feb 19 05:43:16 crc kubenswrapper[5012]: I0219 05:43:16.403923 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" event={"ID":"f22ec0c5-41a9-4f36-adb0-405e5a26d209","Type":"ContainerDied","Data":"6bd9704878ce796ee545aeab88709706e42c6cb9f878bf8b26a1785cb4cf93bf"}
Feb 19 05:43:16 crc kubenswrapper[5012]: I0219 05:43:16.404488 5012 scope.go:117] "RemoveContainer" containerID="7ff9e9710973d65273f4c7d1b2b07184b8147f2ccbf37eac212553af6a1fa77e"
Feb 19 05:43:16 crc kubenswrapper[5012]: I0219 05:43:16.493939 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bf9dcd95-lzm7b"]
Feb 19 05:43:16 crc kubenswrapper[5012]: I0219 05:43:16.502895 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bf9dcd95-lzm7b"]
Feb 19 05:43:16 crc kubenswrapper[5012]: I0219 05:43:16.730246 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" path="/var/lib/kubelet/pods/f22ec0c5-41a9-4f36-adb0-405e5a26d209/volumes"
Feb 19 05:43:17 crc kubenswrapper[5012]: I0219 05:43:17.862461 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79bf9dcd95-lzm7b" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout"
Feb 19 05:43:24 crc kubenswrapper[5012]: E0219 05:43:24.831741 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-watcher-api:current"
Feb 19 05:43:24 crc kubenswrapper[5012]: E0219 05:43:24.832812 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-watcher-api:current"
Feb 19 05:43:24 crc kubenswrapper[5012]: E0219 05:43:24.833430 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-watcher-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fq87n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-db-sync-cdj57_openstack(89f14c4e-147e-4a05-a8d9-63b93aaad4a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 05:43:24 crc kubenswrapper[5012]: E0219 05:43:24.834664 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-cdj57" podUID="89f14c4e-147e-4a05-a8d9-63b93aaad4a4"
Feb 19 05:43:25 crc kubenswrapper[5012]: E0219 05:43:25.307187 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-barbican-api:current"
Feb 19 05:43:25 crc kubenswrapper[5012]: E0219 05:43:25.307236 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-barbican-api:current"
Feb 19 05:43:25 crc kubenswrapper[5012]: E0219 05:43:25.307409 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-barbican-api:current,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z95mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-jzclm_openstack(a34a979c-9102-471f-9678-048fd5198cb8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 05:43:25 crc kubenswrapper[5012]: E0219 05:43:25.308623 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-jzclm" podUID="a34a979c-9102-471f-9678-048fd5198cb8"
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.441242 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56f66dc579-dpndj"
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.450832 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-855998b9f9-lkm6w"
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.535509 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f66dc579-dpndj" event={"ID":"cb1825de-9782-4820-96aa-d4909a0f7820","Type":"ContainerDied","Data":"9f3f55ad97ef9c22bb96987a2cbaf0c250aabbcc040a9c414cfd11f3987fe5ea"}
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.535728 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56f66dc579-dpndj"
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.540424 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-855998b9f9-lkm6w" event={"ID":"f06c7918-a7b3-4041-bd16-63a73e47bf13","Type":"ContainerDied","Data":"fe94aacdf3b8c844dc9abab1e415854e4e47ea212d07513a67fc4c1411f63f3f"}
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.540515 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-855998b9f9-lkm6w"
Feb 19 05:43:25 crc kubenswrapper[5012]: E0219 05:43:25.542259 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-barbican-api:current\\\"\"" pod="openstack/barbican-db-sync-jzclm" podUID="a34a979c-9102-471f-9678-048fd5198cb8"
Feb 19 05:43:25 crc kubenswrapper[5012]: E0219 05:43:25.542529 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-watcher-api:current\\\"\"" pod="openstack/watcher-db-sync-cdj57" podUID="89f14c4e-147e-4a05-a8d9-63b93aaad4a4"
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.565231 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdkkv\" (UniqueName: \"kubernetes.io/projected/f06c7918-a7b3-4041-bd16-63a73e47bf13-kube-api-access-rdkkv\") pod \"f06c7918-a7b3-4041-bd16-63a73e47bf13\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.565712 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f06c7918-a7b3-4041-bd16-63a73e47bf13-logs\") pod \"f06c7918-a7b3-4041-bd16-63a73e47bf13\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.565938 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f06c7918-a7b3-4041-bd16-63a73e47bf13-horizon-secret-key\") pod \"f06c7918-a7b3-4041-bd16-63a73e47bf13\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.566195 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb1825de-9782-4820-96aa-d4909a0f7820-logs\") pod \"cb1825de-9782-4820-96aa-d4909a0f7820\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.566366 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f06c7918-a7b3-4041-bd16-63a73e47bf13-logs" (OuterVolumeSpecName: "logs") pod "f06c7918-a7b3-4041-bd16-63a73e47bf13" (UID: "f06c7918-a7b3-4041-bd16-63a73e47bf13"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.566485 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-config-data\") pod \"f06c7918-a7b3-4041-bd16-63a73e47bf13\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.566776 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb1825de-9782-4820-96aa-d4909a0f7820-logs" (OuterVolumeSpecName: "logs") pod "cb1825de-9782-4820-96aa-d4909a0f7820" (UID: "cb1825de-9782-4820-96aa-d4909a0f7820"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.566789 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-config-data\") pod \"cb1825de-9782-4820-96aa-d4909a0f7820\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.567292 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-scripts\") pod \"cb1825de-9782-4820-96aa-d4909a0f7820\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.568960 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cb1825de-9782-4820-96aa-d4909a0f7820-horizon-secret-key\") pod \"cb1825de-9782-4820-96aa-d4909a0f7820\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.569066 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9stt\" (UniqueName: \"kubernetes.io/projected/cb1825de-9782-4820-96aa-d4909a0f7820-kube-api-access-p9stt\") pod \"cb1825de-9782-4820-96aa-d4909a0f7820\" (UID: \"cb1825de-9782-4820-96aa-d4909a0f7820\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.569153 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-scripts\") pod \"f06c7918-a7b3-4041-bd16-63a73e47bf13\" (UID: \"f06c7918-a7b3-4041-bd16-63a73e47bf13\") "
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.567765 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-config-data" (OuterVolumeSpecName: "config-data") pod "f06c7918-a7b3-4041-bd16-63a73e47bf13" (UID: "f06c7918-a7b3-4041-bd16-63a73e47bf13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.567760 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-config-data" (OuterVolumeSpecName: "config-data") pod "cb1825de-9782-4820-96aa-d4909a0f7820" (UID: "cb1825de-9782-4820-96aa-d4909a0f7820"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.568044 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-scripts" (OuterVolumeSpecName: "scripts") pod "cb1825de-9782-4820-96aa-d4909a0f7820" (UID: "cb1825de-9782-4820-96aa-d4909a0f7820"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.570208 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-scripts" (OuterVolumeSpecName: "scripts") pod "f06c7918-a7b3-4041-bd16-63a73e47bf13" (UID: "f06c7918-a7b3-4041-bd16-63a73e47bf13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.571461 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb1825de-9782-4820-96aa-d4909a0f7820-logs\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.571648 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.572038 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.572099 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb1825de-9782-4820-96aa-d4909a0f7820-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.572166 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f06c7918-a7b3-4041-bd16-63a73e47bf13-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.572227 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f06c7918-a7b3-4041-bd16-63a73e47bf13-logs\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.574520 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f06c7918-a7b3-4041-bd16-63a73e47bf13-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f06c7918-a7b3-4041-bd16-63a73e47bf13" (UID: "f06c7918-a7b3-4041-bd16-63a73e47bf13"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.574613 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb1825de-9782-4820-96aa-d4909a0f7820-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "cb1825de-9782-4820-96aa-d4909a0f7820" (UID: "cb1825de-9782-4820-96aa-d4909a0f7820"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.574800 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb1825de-9782-4820-96aa-d4909a0f7820-kube-api-access-p9stt" (OuterVolumeSpecName: "kube-api-access-p9stt") pod "cb1825de-9782-4820-96aa-d4909a0f7820" (UID: "cb1825de-9782-4820-96aa-d4909a0f7820"). InnerVolumeSpecName "kube-api-access-p9stt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.575489 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06c7918-a7b3-4041-bd16-63a73e47bf13-kube-api-access-rdkkv" (OuterVolumeSpecName: "kube-api-access-rdkkv") pod "f06c7918-a7b3-4041-bd16-63a73e47bf13" (UID: "f06c7918-a7b3-4041-bd16-63a73e47bf13"). InnerVolumeSpecName "kube-api-access-rdkkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.674856 5012 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f06c7918-a7b3-4041-bd16-63a73e47bf13-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.674905 5012 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cb1825de-9782-4820-96aa-d4909a0f7820-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.674919 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9stt\" (UniqueName: \"kubernetes.io/projected/cb1825de-9782-4820-96aa-d4909a0f7820-kube-api-access-p9stt\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.674935 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdkkv\" (UniqueName: \"kubernetes.io/projected/f06c7918-a7b3-4041-bd16-63a73e47bf13-kube-api-access-rdkkv\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.915354 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56f66dc579-dpndj"]
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.934273 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-56f66dc579-dpndj"]
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.947569 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-855998b9f9-lkm6w"]
Feb 19 05:43:25 crc kubenswrapper[5012]: I0219 05:43:25.953655 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-855998b9f9-lkm6w"]
Feb 19 05:43:26 crc kubenswrapper[5012]: E0219 05:43:26.654993 5012 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled"
image="quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:current" Feb 19 05:43:26 crc kubenswrapper[5012]: E0219 05:43:26.655899 5012 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:current" Feb 19 05:43:26 crc kubenswrapper[5012]: E0219 05:43:26.656152 5012 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bu
ndle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sghmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-xj7dw_openstack(b98c972c-b350-44a1-a7c5-028914fe7bfc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 05:43:26 crc kubenswrapper[5012]: E0219 05:43:26.657423 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-xj7dw" podUID="b98c972c-b350-44a1-a7c5-028914fe7bfc" Feb 19 05:43:26 crc kubenswrapper[5012]: I0219 05:43:26.662004 5012 scope.go:117] "RemoveContainer" containerID="d961f9b5d55a9bfaff596c3b756f78502ea40069f8fb1a18443be8e579f64c1b" Feb 19 05:43:26 crc kubenswrapper[5012]: I0219 05:43:26.731873 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb1825de-9782-4820-96aa-d4909a0f7820" path="/var/lib/kubelet/pods/cb1825de-9782-4820-96aa-d4909a0f7820/volumes" Feb 19 05:43:26 crc kubenswrapper[5012]: I0219 05:43:26.732634 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f06c7918-a7b3-4041-bd16-63a73e47bf13" path="/var/lib/kubelet/pods/f06c7918-a7b3-4041-bd16-63a73e47bf13/volumes" Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.204686 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-px7xk"] Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.213008 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cdcb467fb-8tvnz"] Feb 19 05:43:27 crc kubenswrapper[5012]: W0219 05:43:27.223596 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod787f8a71_dee4_40d2_b33b_85bcfc58f921.slice/crio-22e7c478cf5c3572f072dadd10797eb555b9b7702664f2f3d3e6b1d4af431e39 WatchSource:0}: Error finding container 22e7c478cf5c3572f072dadd10797eb555b9b7702664f2f3d3e6b1d4af431e39: Status 404 returned error can't find the container with id 22e7c478cf5c3572f072dadd10797eb555b9b7702664f2f3d3e6b1d4af431e39 Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.318320 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zf89d"] Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.350699 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.556724 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb53e400-f5d7-4c86-9aab-eda61301a4cf","Type":"ContainerStarted","Data":"ed8c2a32d5ff07698cb91058f40ee14be5a75fe90e647b1bf825aa951923980d"} Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.558796 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdcb467fb-8tvnz" event={"ID":"6c937bbe-f068-4e5b-81ad-9455104062da","Type":"ContainerStarted","Data":"76c122b092d56fce3822adebfb83dea25f5d7b1dfa2c9ca1adcf7e290a003998"} Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.558853 5012 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdcb467fb-8tvnz" event={"ID":"6c937bbe-f068-4e5b-81ad-9455104062da","Type":"ContainerStarted","Data":"0e8ce8a183c403ce190eb750bca6315d62e08338a10355a944facbcaf4ffac73"} Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.560476 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"88a90a35-c893-4857-9f8b-9a405c96c044","Type":"ContainerStarted","Data":"080c5b8e7193f737b62e579122d8ee996388ee8f88c0a41d0bb8c0f53c6cdc33"} Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.560611 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="88a90a35-c893-4857-9f8b-9a405c96c044" containerName="glance-log" containerID="cri-o://ac6da210c5bb5413e246a1f52c04e83d2fc86480ea465052b466d052aefc08b7" gracePeriod=30 Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.560632 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="88a90a35-c893-4857-9f8b-9a405c96c044" containerName="glance-httpd" containerID="cri-o://080c5b8e7193f737b62e579122d8ee996388ee8f88c0a41d0bb8c0f53c6cdc33" gracePeriod=30 Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.564666 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-px7xk" event={"ID":"787f8a71-dee4-40d2-b33b-85bcfc58f921","Type":"ContainerStarted","Data":"c3b30cfc4d7788c5bf2800aec00271d7a398ee5903276843825107c74fa7f5b9"} Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.564702 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-px7xk" event={"ID":"787f8a71-dee4-40d2-b33b-85bcfc58f921","Type":"ContainerStarted","Data":"22e7c478cf5c3572f072dadd10797eb555b9b7702664f2f3d3e6b1d4af431e39"} Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.570612 5012 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/keystone-bootstrap-zf89d" event={"ID":"555a6373-5cdf-490e-b6ea-b0fb55425d28","Type":"ContainerStarted","Data":"9a5f9edac057b3de1965c26aac0927e9eaced35943e1b07d9b0176cc162f7fc5"} Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.570638 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zf89d" event={"ID":"555a6373-5cdf-490e-b6ea-b0fb55425d28","Type":"ContainerStarted","Data":"6d075631935c3f811666c0cc2948951facdd256267e93e94fef708b6510322b7"} Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.586744 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w9g6v" event={"ID":"be803869-4625-418d-bd39-bdbb4e6e0bfd","Type":"ContainerStarted","Data":"8659190e8633f7b88664c6c7e44927faf89d76ab66a53b4530e433a52d8c9664"} Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.589100 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=34.589080613 podStartE2EDuration="34.589080613s" podCreationTimestamp="2026-02-19 05:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:27.575713447 +0000 UTC m=+1103.609036016" watchObservedRunningTime="2026-02-19 05:43:27.589080613 +0000 UTC m=+1103.622403182" Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.597804 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c45b5647f-k799c" event={"ID":"d5eb71f6-31df-418a-98dd-11668ff38825","Type":"ContainerStarted","Data":"1740dd45d12f4fba32d28fe0edd137672168109214e3411aa79b0b01fe5420c4"} Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.597865 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c45b5647f-k799c" 
event={"ID":"d5eb71f6-31df-418a-98dd-11668ff38825","Type":"ContainerStarted","Data":"0edf70792244ac07bbfc8312a7939b51e2c1f6efdd9a9026a76bb21f0665c246"} Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.598022 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c45b5647f-k799c" podUID="d5eb71f6-31df-418a-98dd-11668ff38825" containerName="horizon-log" containerID="cri-o://0edf70792244ac07bbfc8312a7939b51e2c1f6efdd9a9026a76bb21f0665c246" gracePeriod=30 Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.598162 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c45b5647f-k799c" podUID="d5eb71f6-31df-418a-98dd-11668ff38825" containerName="horizon" containerID="cri-o://1740dd45d12f4fba32d28fe0edd137672168109214e3411aa79b0b01fe5420c4" gracePeriod=30 Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.602540 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zf89d" podStartSLOduration=19.602523051 podStartE2EDuration="19.602523051s" podCreationTimestamp="2026-02-19 05:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:27.594852818 +0000 UTC m=+1103.628175387" watchObservedRunningTime="2026-02-19 05:43:27.602523051 +0000 UTC m=+1103.635845620" Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.604266 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerStarted","Data":"e454f72d42b6df4ccbea155823e52fa4dbc71ac17be418579910450da7af968d"} Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.613371 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-px7xk" podStartSLOduration=16.613352323 podStartE2EDuration="16.613352323s" 
podCreationTimestamp="2026-02-19 05:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:27.608690106 +0000 UTC m=+1103.642012665" watchObservedRunningTime="2026-02-19 05:43:27.613352323 +0000 UTC m=+1103.646674892" Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.613415 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75cc7d9585-x8r8l" event={"ID":"7c163961-185c-418b-a0f5-a4d55b59f3ec","Type":"ContainerStarted","Data":"55079917653f6fec11a6880998a2eb1b86a3b903487d3ecb0aa13cd966d7990e"} Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.613449 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75cc7d9585-x8r8l" event={"ID":"7c163961-185c-418b-a0f5-a4d55b59f3ec","Type":"ContainerStarted","Data":"3fcdc6a7de1157e87df26c6381be0f82492f8c4422bc5e6ab2f42667c4a696ee"} Feb 19 05:43:27 crc kubenswrapper[5012]: E0219 05:43:27.614037 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:current\\\"\"" pod="openstack/cinder-db-sync-xj7dw" podUID="b98c972c-b350-44a1-a7c5-028914fe7bfc" Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.626540 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-w9g6v" podStartSLOduration=3.6619611389999998 podStartE2EDuration="40.626522574s" podCreationTimestamp="2026-02-19 05:42:47 +0000 UTC" firstStartedPulling="2026-02-19 05:42:48.338002479 +0000 UTC m=+1064.371325048" lastFinishedPulling="2026-02-19 05:43:25.302563904 +0000 UTC m=+1101.335886483" observedRunningTime="2026-02-19 05:43:27.623652942 +0000 UTC m=+1103.656975511" watchObservedRunningTime="2026-02-19 05:43:27.626522574 +0000 UTC m=+1103.659845143" Feb 19 05:43:27 crc 
kubenswrapper[5012]: I0219 05:43:27.687092 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-75cc7d9585-x8r8l" podStartSLOduration=2.6878909 podStartE2EDuration="36.687069806s" podCreationTimestamp="2026-02-19 05:42:51 +0000 UTC" firstStartedPulling="2026-02-19 05:42:52.742332928 +0000 UTC m=+1068.775655497" lastFinishedPulling="2026-02-19 05:43:26.741511824 +0000 UTC m=+1102.774834403" observedRunningTime="2026-02-19 05:43:27.667012622 +0000 UTC m=+1103.700335191" watchObservedRunningTime="2026-02-19 05:43:27.687069806 +0000 UTC m=+1103.720392375" Feb 19 05:43:27 crc kubenswrapper[5012]: I0219 05:43:27.697864 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5c45b5647f-k799c" podStartSLOduration=4.294761211 podStartE2EDuration="40.697841687s" podCreationTimestamp="2026-02-19 05:42:47 +0000 UTC" firstStartedPulling="2026-02-19 05:42:48.960386609 +0000 UTC m=+1064.993709168" lastFinishedPulling="2026-02-19 05:43:25.363467075 +0000 UTC m=+1101.396789644" observedRunningTime="2026-02-19 05:43:27.651392039 +0000 UTC m=+1103.684714618" watchObservedRunningTime="2026-02-19 05:43:27.697841687 +0000 UTC m=+1103.731164256" Feb 19 05:43:28 crc kubenswrapper[5012]: I0219 05:43:28.140527 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:43:28 crc kubenswrapper[5012]: I0219 05:43:28.622632 5012 generic.go:334] "Generic (PLEG): container finished" podID="88a90a35-c893-4857-9f8b-9a405c96c044" containerID="080c5b8e7193f737b62e579122d8ee996388ee8f88c0a41d0bb8c0f53c6cdc33" exitCode=143 Feb 19 05:43:28 crc kubenswrapper[5012]: I0219 05:43:28.622685 5012 generic.go:334] "Generic (PLEG): container finished" podID="88a90a35-c893-4857-9f8b-9a405c96c044" containerID="ac6da210c5bb5413e246a1f52c04e83d2fc86480ea465052b466d052aefc08b7" exitCode=143 Feb 19 05:43:28 crc kubenswrapper[5012]: I0219 05:43:28.622693 5012 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"88a90a35-c893-4857-9f8b-9a405c96c044","Type":"ContainerDied","Data":"080c5b8e7193f737b62e579122d8ee996388ee8f88c0a41d0bb8c0f53c6cdc33"} Feb 19 05:43:28 crc kubenswrapper[5012]: I0219 05:43:28.622765 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"88a90a35-c893-4857-9f8b-9a405c96c044","Type":"ContainerDied","Data":"ac6da210c5bb5413e246a1f52c04e83d2fc86480ea465052b466d052aefc08b7"} Feb 19 05:43:31 crc kubenswrapper[5012]: I0219 05:43:31.881749 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:43:31 crc kubenswrapper[5012]: I0219 05:43:31.882330 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:43:32 crc kubenswrapper[5012]: I0219 05:43:32.670348 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cdcb467fb-8tvnz" event={"ID":"6c937bbe-f068-4e5b-81ad-9455104062da","Type":"ContainerStarted","Data":"2584a3c6001f260f3a8f60bdf3e0d6ec9921502c46539b1bf34925a5b4a37ead"} Feb 19 05:43:32 crc kubenswrapper[5012]: I0219 05:43:32.680125 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb53e400-f5d7-4c86-9aab-eda61301a4cf","Type":"ContainerStarted","Data":"84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759"} Feb 19 05:43:32 crc kubenswrapper[5012]: I0219 05:43:32.691246 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6cdcb467fb-8tvnz" podStartSLOduration=37.691228878 podStartE2EDuration="37.691228878s" podCreationTimestamp="2026-02-19 05:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:32.688667294 +0000 UTC 
m=+1108.721989873" watchObservedRunningTime="2026-02-19 05:43:32.691228878 +0000 UTC m=+1108.724551437" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.122505 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.155284 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-public-tls-certs\") pod \"88a90a35-c893-4857-9f8b-9a405c96c044\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.155366 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-logs\") pod \"88a90a35-c893-4857-9f8b-9a405c96c044\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.155398 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"88a90a35-c893-4857-9f8b-9a405c96c044\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.155439 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-httpd-run\") pod \"88a90a35-c893-4857-9f8b-9a405c96c044\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.155471 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-combined-ca-bundle\") pod \"88a90a35-c893-4857-9f8b-9a405c96c044\" (UID: 
\"88a90a35-c893-4857-9f8b-9a405c96c044\") " Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.155546 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-config-data\") pod \"88a90a35-c893-4857-9f8b-9a405c96c044\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.155613 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhpwq\" (UniqueName: \"kubernetes.io/projected/88a90a35-c893-4857-9f8b-9a405c96c044-kube-api-access-fhpwq\") pod \"88a90a35-c893-4857-9f8b-9a405c96c044\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.155640 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-scripts\") pod \"88a90a35-c893-4857-9f8b-9a405c96c044\" (UID: \"88a90a35-c893-4857-9f8b-9a405c96c044\") " Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.156105 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "88a90a35-c893-4857-9f8b-9a405c96c044" (UID: "88a90a35-c893-4857-9f8b-9a405c96c044"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.158242 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-logs" (OuterVolumeSpecName: "logs") pod "88a90a35-c893-4857-9f8b-9a405c96c044" (UID: "88a90a35-c893-4857-9f8b-9a405c96c044"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.160050 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "88a90a35-c893-4857-9f8b-9a405c96c044" (UID: "88a90a35-c893-4857-9f8b-9a405c96c044"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.161340 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-scripts" (OuterVolumeSpecName: "scripts") pod "88a90a35-c893-4857-9f8b-9a405c96c044" (UID: "88a90a35-c893-4857-9f8b-9a405c96c044"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.163027 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88a90a35-c893-4857-9f8b-9a405c96c044-kube-api-access-fhpwq" (OuterVolumeSpecName: "kube-api-access-fhpwq") pod "88a90a35-c893-4857-9f8b-9a405c96c044" (UID: "88a90a35-c893-4857-9f8b-9a405c96c044"). InnerVolumeSpecName "kube-api-access-fhpwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.191630 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88a90a35-c893-4857-9f8b-9a405c96c044" (UID: "88a90a35-c893-4857-9f8b-9a405c96c044"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.211221 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "88a90a35-c893-4857-9f8b-9a405c96c044" (UID: "88a90a35-c893-4857-9f8b-9a405c96c044"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.218018 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-config-data" (OuterVolumeSpecName: "config-data") pod "88a90a35-c893-4857-9f8b-9a405c96c044" (UID: "88a90a35-c893-4857-9f8b-9a405c96c044"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.258083 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhpwq\" (UniqueName: \"kubernetes.io/projected/88a90a35-c893-4857-9f8b-9a405c96c044-kube-api-access-fhpwq\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.258119 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.258129 5012 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.258140 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:33 crc 
kubenswrapper[5012]: I0219 05:43:33.258176 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.258185 5012 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/88a90a35-c893-4857-9f8b-9a405c96c044-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.258195 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.258206 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a90a35-c893-4857-9f8b-9a405c96c044-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.277792 5012 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.360258 5012 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.693115 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"88a90a35-c893-4857-9f8b-9a405c96c044","Type":"ContainerDied","Data":"ecf3b5a274f4488d8ab70a5b1867720561b8843e1c2bc81491a836b7a8a78bb9"}
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.693211 5012 scope.go:117] "RemoveContainer" containerID="080c5b8e7193f737b62e579122d8ee996388ee8f88c0a41d0bb8c0f53c6cdc33"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.694286 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.696353 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb53e400-f5d7-4c86-9aab-eda61301a4cf","Type":"ContainerStarted","Data":"27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6"}
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.696400 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerName="glance-log" containerID="cri-o://84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759" gracePeriod=30
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.696463 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerName="glance-httpd" containerID="cri-o://27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6" gracePeriod=30
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.701104 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerStarted","Data":"5011a2da1b6766de9dceb07b094e5e5b90457583e5b1d7f21e441d5bc980ef81"}
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.726389 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=39.72636314 podStartE2EDuration="39.72636314s" podCreationTimestamp="2026-02-19 05:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:33.720490333 +0000 UTC m=+1109.753812912" watchObservedRunningTime="2026-02-19 05:43:33.72636314 +0000 UTC m=+1109.759685709"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.738200 5012 scope.go:117] "RemoveContainer" containerID="ac6da210c5bb5413e246a1f52c04e83d2fc86480ea465052b466d052aefc08b7"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.745357 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.755134 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.787675 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 05:43:33 crc kubenswrapper[5012]: E0219 05:43:33.788348 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a90a35-c893-4857-9f8b-9a405c96c044" containerName="glance-httpd"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.788361 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a90a35-c893-4857-9f8b-9a405c96c044" containerName="glance-httpd"
Feb 19 05:43:33 crc kubenswrapper[5012]: E0219 05:43:33.788374 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a90a35-c893-4857-9f8b-9a405c96c044" containerName="glance-log"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.788379 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a90a35-c893-4857-9f8b-9a405c96c044" containerName="glance-log"
Feb 19 05:43:33 crc kubenswrapper[5012]: E0219 05:43:33.788395 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerName="init"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.788401 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerName="init"
Feb 19 05:43:33 crc kubenswrapper[5012]: E0219 05:43:33.788416 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerName="dnsmasq-dns"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.788423 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerName="dnsmasq-dns"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.788580 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="88a90a35-c893-4857-9f8b-9a405c96c044" containerName="glance-httpd"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.788596 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="88a90a35-c893-4857-9f8b-9a405c96c044" containerName="glance-log"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.788604 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f22ec0c5-41a9-4f36-adb0-405e5a26d209" containerName="dnsmasq-dns"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.789432 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.792253 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.791991 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.864387 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.971957 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5tcx\" (UniqueName: \"kubernetes.io/projected/74c05972-714b-4cc7-97f6-d4a2c205eb08-kube-api-access-q5tcx\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.972100 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-config-data\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.972130 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-scripts\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.972291 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.972341 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.972358 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.972378 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-logs\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:33 crc kubenswrapper[5012]: I0219 05:43:33.972393 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.074247 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.074319 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.074341 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.074355 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-logs\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.074369 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.074458 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5tcx\" (UniqueName: \"kubernetes.io/projected/74c05972-714b-4cc7-97f6-d4a2c205eb08-kube-api-access-q5tcx\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.074499 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-config-data\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.074519 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-scripts\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.074826 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.075120 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.075275 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-logs\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.085119 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-scripts\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.093284 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.093286 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5tcx\" (UniqueName: \"kubernetes.io/projected/74c05972-714b-4cc7-97f6-d4a2c205eb08-kube-api-access-q5tcx\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.093614 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.096945 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-config-data\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.106750 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.409062 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.589381 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.693036 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-combined-ca-bundle\") pod \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") "
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.693172 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-httpd-run\") pod \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") "
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.693389 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-scripts\") pod \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") "
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.693427 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-logs\") pod \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") "
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.693459 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") "
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.693476 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-config-data\") pod \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") "
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.693510 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vglb8\" (UniqueName: \"kubernetes.io/projected/eb53e400-f5d7-4c86-9aab-eda61301a4cf-kube-api-access-vglb8\") pod \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") "
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.693631 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-internal-tls-certs\") pod \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\" (UID: \"eb53e400-f5d7-4c86-9aab-eda61301a4cf\") "
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.708908 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "eb53e400-f5d7-4c86-9aab-eda61301a4cf" (UID: "eb53e400-f5d7-4c86-9aab-eda61301a4cf"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.712220 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-logs" (OuterVolumeSpecName: "logs") pod "eb53e400-f5d7-4c86-9aab-eda61301a4cf" (UID: "eb53e400-f5d7-4c86-9aab-eda61301a4cf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.724909 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "eb53e400-f5d7-4c86-9aab-eda61301a4cf" (UID: "eb53e400-f5d7-4c86-9aab-eda61301a4cf"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.725939 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb53e400-f5d7-4c86-9aab-eda61301a4cf-kube-api-access-vglb8" (OuterVolumeSpecName: "kube-api-access-vglb8") pod "eb53e400-f5d7-4c86-9aab-eda61301a4cf" (UID: "eb53e400-f5d7-4c86-9aab-eda61301a4cf"). InnerVolumeSpecName "kube-api-access-vglb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.737808 5012 generic.go:334] "Generic (PLEG): container finished" podID="be803869-4625-418d-bd39-bdbb4e6e0bfd" containerID="8659190e8633f7b88664c6c7e44927faf89d76ab66a53b4530e433a52d8c9664" exitCode=0
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.747238 5012 generic.go:334] "Generic (PLEG): container finished" podID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerID="27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6" exitCode=0
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.747270 5012 generic.go:334] "Generic (PLEG): container finished" podID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerID="84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759" exitCode=143
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.747387 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.755822 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-scripts" (OuterVolumeSpecName: "scripts") pod "eb53e400-f5d7-4c86-9aab-eda61301a4cf" (UID: "eb53e400-f5d7-4c86-9aab-eda61301a4cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.757877 5012 generic.go:334] "Generic (PLEG): container finished" podID="555a6373-5cdf-490e-b6ea-b0fb55425d28" containerID="9a5f9edac057b3de1965c26aac0927e9eaced35943e1b07d9b0176cc162f7fc5" exitCode=0
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.762446 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88a90a35-c893-4857-9f8b-9a405c96c044" path="/var/lib/kubelet/pods/88a90a35-c893-4857-9f8b-9a405c96c044/volumes"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.771551 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-config-data" (OuterVolumeSpecName: "config-data") pod "eb53e400-f5d7-4c86-9aab-eda61301a4cf" (UID: "eb53e400-f5d7-4c86-9aab-eda61301a4cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.772548 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb53e400-f5d7-4c86-9aab-eda61301a4cf" (UID: "eb53e400-f5d7-4c86-9aab-eda61301a4cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.791438 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eb53e400-f5d7-4c86-9aab-eda61301a4cf" (UID: "eb53e400-f5d7-4c86-9aab-eda61301a4cf"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.798004 5012 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.798037 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.798051 5012 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.798062 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.798072 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb53e400-f5d7-4c86-9aab-eda61301a4cf-logs\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.798114 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.798125 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb53e400-f5d7-4c86-9aab-eda61301a4cf-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.798136 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vglb8\" (UniqueName: \"kubernetes.io/projected/eb53e400-f5d7-4c86-9aab-eda61301a4cf-kube-api-access-vglb8\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.821742 5012 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.823227 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w9g6v" event={"ID":"be803869-4625-418d-bd39-bdbb4e6e0bfd","Type":"ContainerDied","Data":"8659190e8633f7b88664c6c7e44927faf89d76ab66a53b4530e433a52d8c9664"}
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.823277 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb53e400-f5d7-4c86-9aab-eda61301a4cf","Type":"ContainerDied","Data":"27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6"}
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.823313 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb53e400-f5d7-4c86-9aab-eda61301a4cf","Type":"ContainerDied","Data":"84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759"}
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.823328 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb53e400-f5d7-4c86-9aab-eda61301a4cf","Type":"ContainerDied","Data":"ed8c2a32d5ff07698cb91058f40ee14be5a75fe90e647b1bf825aa951923980d"}
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.823340 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zf89d" event={"ID":"555a6373-5cdf-490e-b6ea-b0fb55425d28","Type":"ContainerDied","Data":"9a5f9edac057b3de1965c26aac0927e9eaced35943e1b07d9b0176cc162f7fc5"}
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.823503 5012 scope.go:117] "RemoveContainer" containerID="27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.856039 5012 scope.go:117] "RemoveContainer" containerID="84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.889265 5012 scope.go:117] "RemoveContainer" containerID="27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6"
Feb 19 05:43:34 crc kubenswrapper[5012]: E0219 05:43:34.889850 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6\": container with ID starting with 27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6 not found: ID does not exist" containerID="27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.889896 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6"} err="failed to get container status \"27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6\": rpc error: code = NotFound desc = could not find container \"27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6\": container with ID starting with 27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6 not found: ID does not exist"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.889923 5012 scope.go:117] "RemoveContainer" containerID="84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759"
Feb 19 05:43:34 crc kubenswrapper[5012]: E0219 05:43:34.890128 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759\": container with ID starting with 84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759 not found: ID does not exist" containerID="84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.890149 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759"} err="failed to get container status \"84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759\": rpc error: code = NotFound desc = could not find container \"84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759\": container with ID starting with 84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759 not found: ID does not exist"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.890165 5012 scope.go:117] "RemoveContainer" containerID="27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.890389 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6"} err="failed to get container status \"27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6\": rpc error: code = NotFound desc = could not find container \"27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6\": container with ID starting with 27214ed26559947bbee9ad288c03ee52151f840e5735db2801d064575cdcb0b6 not found: ID does not exist"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.890412 5012 scope.go:117] "RemoveContainer" containerID="84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.890576 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759"} err="failed to get container status \"84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759\": rpc error: code = NotFound desc = could not find container \"84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759\": container with ID starting with 84f1e9081dbf4845ce2fb0a27e8c8cd84295d131b01a065006cd01af9f833759 not found: ID does not exist"
Feb 19 05:43:34 crc kubenswrapper[5012]: I0219 05:43:34.906994 5012 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.059158 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.088521 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.097809 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.113646 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 05:43:35 crc kubenswrapper[5012]: E0219 05:43:35.114087 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerName="glance-log"
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.114105 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerName="glance-log"
Feb 19 05:43:35 crc kubenswrapper[5012]: E0219 05:43:35.114141 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerName="glance-httpd"
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.114148 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerName="glance-httpd"
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.114394 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerName="glance-httpd"
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.114418 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" containerName="glance-log"
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.115340 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.124839 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.125025 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.127956 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.221775 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.221820 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.221889 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.221910 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.221988 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.222024 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.222052 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d4wq\" (UniqueName: \"kubernetes.io/projected/50127c6b-476e-473a-877d-00fd5feb6bb4-kube-api-access-2d4wq\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0"
Feb 19 05:43:35 crc
kubenswrapper[5012]: I0219 05:43:35.222073 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-logs\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.325155 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.325205 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.325288 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.325321 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.325357 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2d4wq\" (UniqueName: \"kubernetes.io/projected/50127c6b-476e-473a-877d-00fd5feb6bb4-kube-api-access-2d4wq\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.325379 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-logs\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.325398 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.325423 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.326186 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.326636 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-logs\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.328187 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.331838 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.331919 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.334231 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.335099 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.348239 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d4wq\" (UniqueName: \"kubernetes.io/projected/50127c6b-476e-473a-877d-00fd5feb6bb4-kube-api-access-2d4wq\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.386125 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.470361 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:35 crc kubenswrapper[5012]: I0219 05:43:35.799928 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"74c05972-714b-4cc7-97f6-d4a2c205eb08","Type":"ContainerStarted","Data":"884f09cbda393c2ecb1a2ab4bc0243e004e662fb5c7beaf39c14a2c689ed4fc6"} Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.006869 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.116762 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w9g6v" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.198956 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.256718 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-scripts\") pod \"be803869-4625-418d-bd39-bdbb4e6e0bfd\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.257232 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-config-data\") pod \"be803869-4625-418d-bd39-bdbb4e6e0bfd\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.257264 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be803869-4625-418d-bd39-bdbb4e6e0bfd-logs\") pod \"be803869-4625-418d-bd39-bdbb4e6e0bfd\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.257440 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rnk9\" (UniqueName: \"kubernetes.io/projected/be803869-4625-418d-bd39-bdbb4e6e0bfd-kube-api-access-6rnk9\") pod \"be803869-4625-418d-bd39-bdbb4e6e0bfd\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.257552 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-combined-ca-bundle\") pod \"be803869-4625-418d-bd39-bdbb4e6e0bfd\" (UID: \"be803869-4625-418d-bd39-bdbb4e6e0bfd\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.257766 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/be803869-4625-418d-bd39-bdbb4e6e0bfd-logs" (OuterVolumeSpecName: "logs") pod "be803869-4625-418d-bd39-bdbb4e6e0bfd" (UID: "be803869-4625-418d-bd39-bdbb4e6e0bfd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.258222 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be803869-4625-418d-bd39-bdbb4e6e0bfd-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.268470 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be803869-4625-418d-bd39-bdbb4e6e0bfd-kube-api-access-6rnk9" (OuterVolumeSpecName: "kube-api-access-6rnk9") pod "be803869-4625-418d-bd39-bdbb4e6e0bfd" (UID: "be803869-4625-418d-bd39-bdbb4e6e0bfd"). InnerVolumeSpecName "kube-api-access-6rnk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.268567 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-scripts" (OuterVolumeSpecName: "scripts") pod "be803869-4625-418d-bd39-bdbb4e6e0bfd" (UID: "be803869-4625-418d-bd39-bdbb4e6e0bfd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.295396 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be803869-4625-418d-bd39-bdbb4e6e0bfd" (UID: "be803869-4625-418d-bd39-bdbb4e6e0bfd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.296455 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-config-data" (OuterVolumeSpecName: "config-data") pod "be803869-4625-418d-bd39-bdbb4e6e0bfd" (UID: "be803869-4625-418d-bd39-bdbb4e6e0bfd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.309843 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.311075 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.359680 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-scripts\") pod \"555a6373-5cdf-490e-b6ea-b0fb55425d28\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.360103 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-fernet-keys\") pod \"555a6373-5cdf-490e-b6ea-b0fb55425d28\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.360136 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-credential-keys\") pod \"555a6373-5cdf-490e-b6ea-b0fb55425d28\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.360219 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-2nknw\" (UniqueName: \"kubernetes.io/projected/555a6373-5cdf-490e-b6ea-b0fb55425d28-kube-api-access-2nknw\") pod \"555a6373-5cdf-490e-b6ea-b0fb55425d28\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.360248 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-combined-ca-bundle\") pod \"555a6373-5cdf-490e-b6ea-b0fb55425d28\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.360291 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-config-data\") pod \"555a6373-5cdf-490e-b6ea-b0fb55425d28\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.360701 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.360716 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.360727 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rnk9\" (UniqueName: \"kubernetes.io/projected/be803869-4625-418d-bd39-bdbb4e6e0bfd-kube-api-access-6rnk9\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.360738 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be803869-4625-418d-bd39-bdbb4e6e0bfd-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.373819 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/555a6373-5cdf-490e-b6ea-b0fb55425d28-kube-api-access-2nknw" (OuterVolumeSpecName: "kube-api-access-2nknw") pod "555a6373-5cdf-490e-b6ea-b0fb55425d28" (UID: "555a6373-5cdf-490e-b6ea-b0fb55425d28"). InnerVolumeSpecName "kube-api-access-2nknw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.374748 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "555a6373-5cdf-490e-b6ea-b0fb55425d28" (UID: "555a6373-5cdf-490e-b6ea-b0fb55425d28"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.374904 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-scripts" (OuterVolumeSpecName: "scripts") pod "555a6373-5cdf-490e-b6ea-b0fb55425d28" (UID: "555a6373-5cdf-490e-b6ea-b0fb55425d28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.376347 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "555a6373-5cdf-490e-b6ea-b0fb55425d28" (UID: "555a6373-5cdf-490e-b6ea-b0fb55425d28"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: E0219 05:43:36.391047 5012 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-combined-ca-bundle podName:555a6373-5cdf-490e-b6ea-b0fb55425d28 nodeName:}" failed. No retries permitted until 2026-02-19 05:43:36.891017063 +0000 UTC m=+1112.924339632 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-combined-ca-bundle") pod "555a6373-5cdf-490e-b6ea-b0fb55425d28" (UID: "555a6373-5cdf-490e-b6ea-b0fb55425d28") : error deleting /var/lib/kubelet/pods/555a6373-5cdf-490e-b6ea-b0fb55425d28/volume-subpaths: remove /var/lib/kubelet/pods/555a6373-5cdf-490e-b6ea-b0fb55425d28/volume-subpaths: no such file or directory Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.394293 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-config-data" (OuterVolumeSpecName: "config-data") pod "555a6373-5cdf-490e-b6ea-b0fb55425d28" (UID: "555a6373-5cdf-490e-b6ea-b0fb55425d28"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.462283 5012 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.462363 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nknw\" (UniqueName: \"kubernetes.io/projected/555a6373-5cdf-490e-b6ea-b0fb55425d28-kube-api-access-2nknw\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.462374 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.462384 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.462394 5012 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.724590 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb53e400-f5d7-4c86-9aab-eda61301a4cf" path="/var/lib/kubelet/pods/eb53e400-f5d7-4c86-9aab-eda61301a4cf/volumes" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.826705 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w9g6v" event={"ID":"be803869-4625-418d-bd39-bdbb4e6e0bfd","Type":"ContainerDied","Data":"1bf63392d872a713c6fdde27be345aac65be8a37d2e0427ef52052d66a795c4c"} Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.826791 5012 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bf63392d872a713c6fdde27be345aac65be8a37d2e0427ef52052d66a795c4c" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.826786 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w9g6v" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.839225 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"50127c6b-476e-473a-877d-00fd5feb6bb4","Type":"ContainerStarted","Data":"a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5"} Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.839271 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"50127c6b-476e-473a-877d-00fd5feb6bb4","Type":"ContainerStarted","Data":"9f6241f52b36b9304734fa39b59f3e6db469ba06ede3efe69c7f2c281f65bc4e"} Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.845939 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zf89d" event={"ID":"555a6373-5cdf-490e-b6ea-b0fb55425d28","Type":"ContainerDied","Data":"6d075631935c3f811666c0cc2948951facdd256267e93e94fef708b6510322b7"} Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.845977 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d075631935c3f811666c0cc2948951facdd256267e93e94fef708b6510322b7" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.846048 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zf89d" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.862556 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"74c05972-714b-4cc7-97f6-d4a2c205eb08","Type":"ContainerStarted","Data":"bc599b5c1fd3d067ccfdc4bf4a2aeefedf9b008aba3555832c150c823f5147fd"} Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.862611 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"74c05972-714b-4cc7-97f6-d4a2c205eb08","Type":"ContainerStarted","Data":"c0addf6cc4fd08d20a94ea77846955a7e25ecc21b7ac41291cb427ac997c6c7a"} Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.899342 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.8993231059999998 podStartE2EDuration="3.899323106s" podCreationTimestamp="2026-02-19 05:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:36.884583156 +0000 UTC m=+1112.917905725" watchObservedRunningTime="2026-02-19 05:43:36.899323106 +0000 UTC m=+1112.932645675" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.947192 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7b574779c9-x2bsv"] Feb 19 05:43:36 crc kubenswrapper[5012]: E0219 05:43:36.949534 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555a6373-5cdf-490e-b6ea-b0fb55425d28" containerName="keystone-bootstrap" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.949559 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="555a6373-5cdf-490e-b6ea-b0fb55425d28" containerName="keystone-bootstrap" Feb 19 05:43:36 crc kubenswrapper[5012]: E0219 05:43:36.949597 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="be803869-4625-418d-bd39-bdbb4e6e0bfd" containerName="placement-db-sync" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.949605 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="be803869-4625-418d-bd39-bdbb4e6e0bfd" containerName="placement-db-sync" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.949812 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="be803869-4625-418d-bd39-bdbb4e6e0bfd" containerName="placement-db-sync" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.949835 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="555a6373-5cdf-490e-b6ea-b0fb55425d28" containerName="keystone-bootstrap" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.950674 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.954656 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.954939 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.968932 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7b574779c9-x2bsv"] Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.971641 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-combined-ca-bundle\") pod \"555a6373-5cdf-490e-b6ea-b0fb55425d28\" (UID: \"555a6373-5cdf-490e-b6ea-b0fb55425d28\") " Feb 19 05:43:36 crc kubenswrapper[5012]: I0219 05:43:36.999595 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"555a6373-5cdf-490e-b6ea-b0fb55425d28" (UID: "555a6373-5cdf-490e-b6ea-b0fb55425d28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.069444 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5c6b5c5b7b-9nnqj"] Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.071746 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.073974 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-fernet-keys\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.074026 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-public-tls-certs\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.074085 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-internal-tls-certs\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.074117 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-credential-keys\") 
pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.074141 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpffc\" (UniqueName: \"kubernetes.io/projected/0e0a6a9f-d11f-4084-9742-7780b20fae75-kube-api-access-gpffc\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.074167 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-combined-ca-bundle\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.074182 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-config-data\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.074214 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-scripts\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.074328 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/555a6373-5cdf-490e-b6ea-b0fb55425d28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" 
Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.078587 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.078792 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.078838 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dzmq8" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.078800 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.087827 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.099393 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c6b5c5b7b-9nnqj"] Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.175910 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-combined-ca-bundle\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.175970 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-public-tls-certs\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176063 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-scripts\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176120 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-fernet-keys\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176146 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-public-tls-certs\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176171 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-internal-tls-certs\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176224 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-internal-tls-certs\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176246 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-config-data\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176296 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-credential-keys\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176346 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpffc\" (UniqueName: \"kubernetes.io/projected/0e0a6a9f-d11f-4084-9742-7780b20fae75-kube-api-access-gpffc\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176372 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-combined-ca-bundle\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176407 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-config-data\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176426 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d214ce94-6c65-4641-a1e2-21f5f920ecec-logs\") 
pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176452 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-scripts\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.176498 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7kbf\" (UniqueName: \"kubernetes.io/projected/d214ce94-6c65-4641-a1e2-21f5f920ecec-kube-api-access-s7kbf\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.180456 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-fernet-keys\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.180544 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-scripts\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.180728 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-internal-tls-certs\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " 
pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.184653 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-public-tls-certs\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.187828 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-combined-ca-bundle\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.202157 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-config-data\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.204804 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0e0a6a9f-d11f-4084-9742-7780b20fae75-credential-keys\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.211947 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpffc\" (UniqueName: \"kubernetes.io/projected/0e0a6a9f-d11f-4084-9742-7780b20fae75-kube-api-access-gpffc\") pod \"keystone-7b574779c9-x2bsv\" (UID: \"0e0a6a9f-d11f-4084-9742-7780b20fae75\") " pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.252269 5012 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6f94997dd8-cvnfv"] Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.253692 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.271195 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f94997dd8-cvnfv"] Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.272021 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.278027 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-internal-tls-certs\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.278092 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-config-data\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.278135 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d214ce94-6c65-4641-a1e2-21f5f920ecec-logs\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.278172 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7kbf\" (UniqueName: 
\"kubernetes.io/projected/d214ce94-6c65-4641-a1e2-21f5f920ecec-kube-api-access-s7kbf\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.278214 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-combined-ca-bundle\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.278228 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-public-tls-certs\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.278271 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-scripts\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.279430 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d214ce94-6c65-4641-a1e2-21f5f920ecec-logs\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.288320 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-scripts\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: 
\"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.289003 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-combined-ca-bundle\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.290239 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-public-tls-certs\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.294041 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-config-data\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.294984 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-internal-tls-certs\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.298531 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7kbf\" (UniqueName: \"kubernetes.io/projected/d214ce94-6c65-4641-a1e2-21f5f920ecec-kube-api-access-s7kbf\") pod \"placement-5c6b5c5b7b-9nnqj\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") " pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 
05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.380879 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-scripts\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.383660 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-logs\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.383701 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-combined-ca-bundle\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.383774 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkk4z\" (UniqueName: \"kubernetes.io/projected/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-kube-api-access-zkk4z\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.383905 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-public-tls-certs\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 
19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.383935 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-config-data\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.383979 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-internal-tls-certs\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.403020 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.486620 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-internal-tls-certs\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.486771 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-scripts\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.486810 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-logs\") pod 
\"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.486834 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-combined-ca-bundle\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.486901 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkk4z\" (UniqueName: \"kubernetes.io/projected/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-kube-api-access-zkk4z\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.487028 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-public-tls-certs\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.487064 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-config-data\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.488059 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-logs\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " 
pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.532838 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-scripts\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.533103 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-internal-tls-certs\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.534924 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-config-data\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.535703 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-combined-ca-bundle\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.538325 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkk4z\" (UniqueName: \"kubernetes.io/projected/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-kube-api-access-zkk4z\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.539050 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ce1e0a-4e51-408c-b3f8-500cf6476b96-public-tls-certs\") pod \"placement-6f94997dd8-cvnfv\" (UID: \"b0ce1e0a-4e51-408c-b3f8-500cf6476b96\") " pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:37 crc kubenswrapper[5012]: I0219 05:43:37.614088 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.149473 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c6b5c5b7b-9nnqj"] Feb 19 05:43:41 crc kubenswrapper[5012]: W0219 05:43:41.168201 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd214ce94_6c65_4641_a1e2_21f5f920ecec.slice/crio-020e2e77d5547a74ce74ede9f57616121d05cdbb046cf4e2e88cca4fa12f2d3b WatchSource:0}: Error finding container 020e2e77d5547a74ce74ede9f57616121d05cdbb046cf4e2e88cca4fa12f2d3b: Status 404 returned error can't find the container with id 020e2e77d5547a74ce74ede9f57616121d05cdbb046cf4e2e88cca4fa12f2d3b Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.284643 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f94997dd8-cvnfv"] Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.306459 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7b574779c9-x2bsv"] Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.930405 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f94997dd8-cvnfv" event={"ID":"b0ce1e0a-4e51-408c-b3f8-500cf6476b96","Type":"ContainerStarted","Data":"ed50fcc4b5644658131f17e470a2e14ab8224f5d90567660d7376f21a9bc839f"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.930752 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f94997dd8-cvnfv" 
event={"ID":"b0ce1e0a-4e51-408c-b3f8-500cf6476b96","Type":"ContainerStarted","Data":"15fb945b89b8b44897961cb004e8122151ce64b02699d11a5abc38fcf5252b14"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.932056 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c6b5c5b7b-9nnqj" event={"ID":"d214ce94-6c65-4641-a1e2-21f5f920ecec","Type":"ContainerStarted","Data":"f9417f3089ab939acabaf087bdedc14bb6991a7978946e02fec09196a1d9ec1c"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.932080 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c6b5c5b7b-9nnqj" event={"ID":"d214ce94-6c65-4641-a1e2-21f5f920ecec","Type":"ContainerStarted","Data":"020e2e77d5547a74ce74ede9f57616121d05cdbb046cf4e2e88cca4fa12f2d3b"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.941946 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"50127c6b-476e-473a-877d-00fd5feb6bb4","Type":"ContainerStarted","Data":"4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.952263 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-cdj57" event={"ID":"89f14c4e-147e-4a05-a8d9-63b93aaad4a4","Type":"ContainerStarted","Data":"d0e335ec457cf8c772f55111337cf2d1aae49da15e75b237650c2e4a19efd926"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.974784 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerStarted","Data":"bdf4b7c244764dd2879106070ed07ec4228686361067f77e4b0e731b44af052c"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.976237 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b574779c9-x2bsv" 
event={"ID":"0e0a6a9f-d11f-4084-9742-7780b20fae75","Type":"ContainerStarted","Data":"e8deb7af9035826b7190a2427036f8bc6012ecf3eda7fd34c5ffb11b5eb4f2b4"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.976263 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b574779c9-x2bsv" event={"ID":"0e0a6a9f-d11f-4084-9742-7780b20fae75","Type":"ContainerStarted","Data":"5ef30ae4559150a8057de9dfba533693fc362a504fa851ffbcebc2172b2e05c0"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.977220 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7b574779c9-x2bsv" Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.979952 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jzclm" event={"ID":"a34a979c-9102-471f-9678-048fd5198cb8","Type":"ContainerStarted","Data":"8322bcc6cc3c5b2d8222ae8137e7a8ab0b73bac7b8fa9b87cd91c71100844e13"} Feb 19 05:43:41 crc kubenswrapper[5012]: I0219 05:43:41.996515 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.996477795 podStartE2EDuration="6.996477795s" podCreationTimestamp="2026-02-19 05:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:41.960802259 +0000 UTC m=+1117.994124838" watchObservedRunningTime="2026-02-19 05:43:41.996477795 +0000 UTC m=+1118.029800364" Feb 19 05:43:42 crc kubenswrapper[5012]: I0219 05:43:42.012273 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-cdj57" podStartSLOduration=2.9744668819999998 podStartE2EDuration="51.012240352s" podCreationTimestamp="2026-02-19 05:42:51 +0000 UTC" firstStartedPulling="2026-02-19 05:42:52.988901024 +0000 UTC m=+1069.022223593" lastFinishedPulling="2026-02-19 05:43:41.026674494 +0000 UTC m=+1117.059997063" 
observedRunningTime="2026-02-19 05:43:41.987431698 +0000 UTC m=+1118.020754267" watchObservedRunningTime="2026-02-19 05:43:42.012240352 +0000 UTC m=+1118.045562921" Feb 19 05:43:42 crc kubenswrapper[5012]: I0219 05:43:42.079715 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-jzclm" podStartSLOduration=3.03960023 podStartE2EDuration="55.079695587s" podCreationTimestamp="2026-02-19 05:42:47 +0000 UTC" firstStartedPulling="2026-02-19 05:42:48.693347859 +0000 UTC m=+1064.726670428" lastFinishedPulling="2026-02-19 05:43:40.733443216 +0000 UTC m=+1116.766765785" observedRunningTime="2026-02-19 05:43:42.009244106 +0000 UTC m=+1118.042566675" watchObservedRunningTime="2026-02-19 05:43:42.079695587 +0000 UTC m=+1118.113018156" Feb 19 05:43:42 crc kubenswrapper[5012]: I0219 05:43:42.090371 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7b574779c9-x2bsv" podStartSLOduration=6.090355915 podStartE2EDuration="6.090355915s" podCreationTimestamp="2026-02-19 05:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:42.044627495 +0000 UTC m=+1118.077950064" watchObservedRunningTime="2026-02-19 05:43:42.090355915 +0000 UTC m=+1118.123678484" Feb 19 05:43:42 crc kubenswrapper[5012]: I0219 05:43:42.990538 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f94997dd8-cvnfv" event={"ID":"b0ce1e0a-4e51-408c-b3f8-500cf6476b96","Type":"ContainerStarted","Data":"1c337d580d168324833674cbb90b7f734c2fcb4515bb4135a581a165db4bb401"} Feb 19 05:43:42 crc kubenswrapper[5012]: I0219 05:43:42.990774 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:42 crc kubenswrapper[5012]: I0219 05:43:42.990998 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/placement-6f94997dd8-cvnfv" Feb 19 05:43:42 crc kubenswrapper[5012]: I0219 05:43:42.994472 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c6b5c5b7b-9nnqj" event={"ID":"d214ce94-6c65-4641-a1e2-21f5f920ecec","Type":"ContainerStarted","Data":"1bf5d73af424c2f421bc54586605dbed2a0980894768360700238dc093ac82ff"} Feb 19 05:43:42 crc kubenswrapper[5012]: I0219 05:43:42.994631 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:42 crc kubenswrapper[5012]: I0219 05:43:42.997525 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xj7dw" event={"ID":"b98c972c-b350-44a1-a7c5-028914fe7bfc","Type":"ContainerStarted","Data":"8dfd0224f4b707b6bfc0133d1f07ea378c585adcdbe5ef8ea62dd0f00fb98923"} Feb 19 05:43:43 crc kubenswrapper[5012]: I0219 05:43:43.026604 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6f94997dd8-cvnfv" podStartSLOduration=6.026583382 podStartE2EDuration="6.026583382s" podCreationTimestamp="2026-02-19 05:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:43.01855846 +0000 UTC m=+1119.051881029" watchObservedRunningTime="2026-02-19 05:43:43.026583382 +0000 UTC m=+1119.059905951" Feb 19 05:43:43 crc kubenswrapper[5012]: I0219 05:43:43.049469 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5c6b5c5b7b-9nnqj" podStartSLOduration=6.049452127 podStartE2EDuration="6.049452127s" podCreationTimestamp="2026-02-19 05:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:43.045971059 +0000 UTC m=+1119.079293628" watchObservedRunningTime="2026-02-19 05:43:43.049452127 +0000 UTC m=+1119.082774696" Feb 19 05:43:43 crc 
kubenswrapper[5012]: I0219 05:43:43.073695 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-xj7dw" podStartSLOduration=4.143648422 podStartE2EDuration="52.073677265s" podCreationTimestamp="2026-02-19 05:42:51 +0000 UTC" firstStartedPulling="2026-02-19 05:42:53.089380579 +0000 UTC m=+1069.122703138" lastFinishedPulling="2026-02-19 05:43:41.019409412 +0000 UTC m=+1117.052731981" observedRunningTime="2026-02-19 05:43:43.070569787 +0000 UTC m=+1119.103892346" watchObservedRunningTime="2026-02-19 05:43:43.073677265 +0000 UTC m=+1119.106999824" Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.007369 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.409572 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.409940 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.430276 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.430343 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.430382 5012 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.430919 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0209690f43a6b6283a91e933f5b897e5259f5fced0261c8b5238e804ce206915"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.430980 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://0209690f43a6b6283a91e933f5b897e5259f5fced0261c8b5238e804ce206915" gracePeriod=600 Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.456060 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.469506 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 05:43:44 crc kubenswrapper[5012]: I0219 05:43:44.565446 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.026580 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="0209690f43a6b6283a91e933f5b897e5259f5fced0261c8b5238e804ce206915" exitCode=0 Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.026796 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"0209690f43a6b6283a91e933f5b897e5259f5fced0261c8b5238e804ce206915"} Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.028031 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"6721017012e745bfd497807b3e0766cbf7c779446215cbbe94491f729f86c6ac"} Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.028052 5012 scope.go:117] "RemoveContainer" containerID="f6b4f2485162f8c24d6693d845318234656e6a8c97d49d2e72f4427654fa319a" Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.028870 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.028913 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.471980 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.472411 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.510784 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:45 crc kubenswrapper[5012]: I0219 05:43:45.521443 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:46 crc kubenswrapper[5012]: I0219 05:43:46.048018 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:46 crc kubenswrapper[5012]: I0219 05:43:46.048071 
5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:46 crc kubenswrapper[5012]: I0219 05:43:46.370844 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:43:47 crc kubenswrapper[5012]: I0219 05:43:47.057957 5012 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 05:43:47 crc kubenswrapper[5012]: I0219 05:43:47.058465 5012 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 05:43:47 crc kubenswrapper[5012]: I0219 05:43:47.187976 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 05:43:47 crc kubenswrapper[5012]: I0219 05:43:47.188540 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 05:43:48 crc kubenswrapper[5012]: I0219 05:43:48.152754 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:48 crc kubenswrapper[5012]: I0219 05:43:48.153108 5012 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 05:43:48 crc kubenswrapper[5012]: I0219 05:43:48.154651 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 05:43:48 crc kubenswrapper[5012]: I0219 05:43:48.162387 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:43:49 crc kubenswrapper[5012]: I0219 05:43:49.080153 5012 generic.go:334] "Generic (PLEG): container finished" podID="89f14c4e-147e-4a05-a8d9-63b93aaad4a4" containerID="d0e335ec457cf8c772f55111337cf2d1aae49da15e75b237650c2e4a19efd926" exitCode=0 Feb 19 05:43:49 crc kubenswrapper[5012]: I0219 05:43:49.080212 5012 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/watcher-db-sync-cdj57" event={"ID":"89f14c4e-147e-4a05-a8d9-63b93aaad4a4","Type":"ContainerDied","Data":"d0e335ec457cf8c772f55111337cf2d1aae49da15e75b237650c2e4a19efd926"} Feb 19 05:43:49 crc kubenswrapper[5012]: I0219 05:43:49.928901 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6cdcb467fb-8tvnz" Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.003748 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75cc7d9585-x8r8l"] Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.006280 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75cc7d9585-x8r8l" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon-log" containerID="cri-o://3fcdc6a7de1157e87df26c6381be0f82492f8c4422bc5e6ab2f42667c4a696ee" gracePeriod=30 Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.006594 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75cc7d9585-x8r8l" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon" containerID="cri-o://55079917653f6fec11a6880998a2eb1b86a3b903487d3ecb0aa13cd966d7990e" gracePeriod=30 Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.089754 5012 generic.go:334] "Generic (PLEG): container finished" podID="a34a979c-9102-471f-9678-048fd5198cb8" containerID="8322bcc6cc3c5b2d8222ae8137e7a8ab0b73bac7b8fa9b87cd91c71100844e13" exitCode=0 Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.089846 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jzclm" event={"ID":"a34a979c-9102-471f-9678-048fd5198cb8","Type":"ContainerDied","Data":"8322bcc6cc3c5b2d8222ae8137e7a8ab0b73bac7b8fa9b87cd91c71100844e13"} Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.755431 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-cdj57" Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.792630 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq87n\" (UniqueName: \"kubernetes.io/projected/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-kube-api-access-fq87n\") pod \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.792741 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-combined-ca-bundle\") pod \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.792819 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-db-sync-config-data\") pod \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.792877 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-config-data\") pod \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\" (UID: \"89f14c4e-147e-4a05-a8d9-63b93aaad4a4\") " Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.801093 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "89f14c4e-147e-4a05-a8d9-63b93aaad4a4" (UID: "89f14c4e-147e-4a05-a8d9-63b93aaad4a4"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.825511 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89f14c4e-147e-4a05-a8d9-63b93aaad4a4" (UID: "89f14c4e-147e-4a05-a8d9-63b93aaad4a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.833466 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-kube-api-access-fq87n" (OuterVolumeSpecName: "kube-api-access-fq87n") pod "89f14c4e-147e-4a05-a8d9-63b93aaad4a4" (UID: "89f14c4e-147e-4a05-a8d9-63b93aaad4a4"). InnerVolumeSpecName "kube-api-access-fq87n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.872852 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-config-data" (OuterVolumeSpecName: "config-data") pod "89f14c4e-147e-4a05-a8d9-63b93aaad4a4" (UID: "89f14c4e-147e-4a05-a8d9-63b93aaad4a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.898665 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq87n\" (UniqueName: \"kubernetes.io/projected/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-kube-api-access-fq87n\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.899420 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.899554 5012 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:50 crc kubenswrapper[5012]: I0219 05:43:50.899649 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89f14c4e-147e-4a05-a8d9-63b93aaad4a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.108687 5012 generic.go:334] "Generic (PLEG): container finished" podID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerID="55079917653f6fec11a6880998a2eb1b86a3b903487d3ecb0aa13cd966d7990e" exitCode=0 Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.108768 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75cc7d9585-x8r8l" event={"ID":"7c163961-185c-418b-a0f5-a4d55b59f3ec","Type":"ContainerDied","Data":"55079917653f6fec11a6880998a2eb1b86a3b903487d3ecb0aa13cd966d7990e"} Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.113098 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-cdj57" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.113133 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-cdj57" event={"ID":"89f14c4e-147e-4a05-a8d9-63b93aaad4a4","Type":"ContainerDied","Data":"416ab3f77df9ec5ea4c8eb669473dd003dd38b711f8cc41ad525b40979b07e19"} Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.113152 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="416ab3f77df9ec5ea4c8eb669473dd003dd38b711f8cc41ad525b40979b07e19" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.425897 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Feb 19 05:43:51 crc kubenswrapper[5012]: E0219 05:43:51.426765 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f14c4e-147e-4a05-a8d9-63b93aaad4a4" containerName="watcher-db-sync" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.426782 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f14c4e-147e-4a05-a8d9-63b93aaad4a4" containerName="watcher-db-sync" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.428863 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f14c4e-147e-4a05-a8d9-63b93aaad4a4" containerName="watcher-db-sync" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.434011 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.442934 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-6chdl" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.445923 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.446093 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.447444 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.450659 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.460245 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jzclm" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.482097 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Feb 19 05:43:51 crc kubenswrapper[5012]: E0219 05:43:51.482528 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a34a979c-9102-471f-9678-048fd5198cb8" containerName="barbican-db-sync" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.482544 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a34a979c-9102-471f-9678-048fd5198cb8" containerName="barbican-db-sync" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.482738 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a34a979c-9102-471f-9678-048fd5198cb8" containerName="barbican-db-sync" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.483338 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.492431 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.497154 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.502557 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.512985 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z95mm\" (UniqueName: \"kubernetes.io/projected/a34a979c-9102-471f-9678-048fd5198cb8-kube-api-access-z95mm\") pod \"a34a979c-9102-471f-9678-048fd5198cb8\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513189 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-db-sync-config-data\") pod \"a34a979c-9102-471f-9678-048fd5198cb8\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513232 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-combined-ca-bundle\") pod \"a34a979c-9102-471f-9678-048fd5198cb8\" (UID: \"a34a979c-9102-471f-9678-048fd5198cb8\") " Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513456 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " 
pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513487 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4778529-f7d0-482b-bd67-003aaa7ca0ae-config-data\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513509 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5wsl\" (UniqueName: \"kubernetes.io/projected/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-kube-api-access-m5wsl\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513536 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513561 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpx9g\" (UniqueName: \"kubernetes.io/projected/7fdaa495-6cde-409a-871a-e334ca3f2a91-kube-api-access-cpx9g\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513625 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-config-data\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " 
pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513659 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k94vz\" (UniqueName: \"kubernetes.io/projected/d4778529-f7d0-482b-bd67-003aaa7ca0ae-kube-api-access-k94vz\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513686 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513705 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fdaa495-6cde-409a-871a-e334ca3f2a91-logs\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513741 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4778529-f7d0-482b-bd67-003aaa7ca0ae-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513768 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " 
pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513792 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-config-data\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513816 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4778529-f7d0-482b-bd67-003aaa7ca0ae-logs\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.513851 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-logs\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.518829 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.523198 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a34a979c-9102-471f-9678-048fd5198cb8" (UID: "a34a979c-9102-471f-9678-048fd5198cb8"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.529561 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a34a979c-9102-471f-9678-048fd5198cb8-kube-api-access-z95mm" (OuterVolumeSpecName: "kube-api-access-z95mm") pod "a34a979c-9102-471f-9678-048fd5198cb8" (UID: "a34a979c-9102-471f-9678-048fd5198cb8"). InnerVolumeSpecName "kube-api-access-z95mm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.554420 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a34a979c-9102-471f-9678-048fd5198cb8" (UID: "a34a979c-9102-471f-9678-048fd5198cb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615252 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-logs\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615337 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615358 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4778529-f7d0-482b-bd67-003aaa7ca0ae-config-data\") pod \"watcher-applier-0\" (UID: 
\"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615380 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5wsl\" (UniqueName: \"kubernetes.io/projected/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-kube-api-access-m5wsl\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615406 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615423 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpx9g\" (UniqueName: \"kubernetes.io/projected/7fdaa495-6cde-409a-871a-e334ca3f2a91-kube-api-access-cpx9g\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615486 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-config-data\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615514 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k94vz\" (UniqueName: \"kubernetes.io/projected/d4778529-f7d0-482b-bd67-003aaa7ca0ae-kube-api-access-k94vz\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc 
kubenswrapper[5012]: I0219 05:43:51.615540 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615559 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fdaa495-6cde-409a-871a-e334ca3f2a91-logs\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615590 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4778529-f7d0-482b-bd67-003aaa7ca0ae-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615610 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615631 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-config-data\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615651 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d4778529-f7d0-482b-bd67-003aaa7ca0ae-logs\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615706 5012 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615718 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a34a979c-9102-471f-9678-048fd5198cb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.615729 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z95mm\" (UniqueName: \"kubernetes.io/projected/a34a979c-9102-471f-9678-048fd5198cb8-kube-api-access-z95mm\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.616171 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4778529-f7d0-482b-bd67-003aaa7ca0ae-logs\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.616490 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-logs\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.621262 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-combined-ca-bundle\") pod \"watcher-api-0\" (UID: 
\"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.621631 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fdaa495-6cde-409a-871a-e334ca3f2a91-logs\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.627756 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.629415 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4778529-f7d0-482b-bd67-003aaa7ca0ae-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.631940 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4778529-f7d0-482b-bd67-003aaa7ca0ae-config-data\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.632708 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-config-data\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.634104 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.637910 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-config-data\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.640840 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5wsl\" (UniqueName: \"kubernetes.io/projected/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-kube-api-access-m5wsl\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.663883 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k94vz\" (UniqueName: \"kubernetes.io/projected/d4778529-f7d0-482b-bd67-003aaa7ca0ae-kube-api-access-k94vz\") pod \"watcher-applier-0\" (UID: \"d4778529-f7d0-482b-bd67-003aaa7ca0ae\") " pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.663997 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpx9g\" (UniqueName: \"kubernetes.io/projected/7fdaa495-6cde-409a-871a-e334ca3f2a91-kube-api-access-cpx9g\") pod \"watcher-decision-engine-0\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.664996 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") " pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.785876 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.807445 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.821240 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Feb 19 05:43:51 crc kubenswrapper[5012]: I0219 05:43:51.882678 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75cc7d9585-x8r8l" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.157:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.157:8080: connect: connection refused" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.126720 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jzclm" event={"ID":"a34a979c-9102-471f-9678-048fd5198cb8","Type":"ContainerDied","Data":"15ee0e6aea238f0e16da222d8f4f49d691f91234f9216b9e8070275343d6a969"} Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.127110 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15ee0e6aea238f0e16da222d8f4f49d691f91234f9216b9e8070275343d6a969" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.127001 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-jzclm" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.135177 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerStarted","Data":"8a02fea3b4cd70626ac243cec71c2d7a481574c8f18cffc243a46c68a245c413"} Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.136022 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="ceilometer-central-agent" containerID="cri-o://e454f72d42b6df4ccbea155823e52fa4dbc71ac17be418579910450da7af968d" gracePeriod=30 Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.136189 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.136217 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="sg-core" containerID="cri-o://bdf4b7c244764dd2879106070ed07ec4228686361067f77e4b0e731b44af052c" gracePeriod=30 Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.136154 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="proxy-httpd" containerID="cri-o://8a02fea3b4cd70626ac243cec71c2d7a481574c8f18cffc243a46c68a245c413" gracePeriod=30 Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.136229 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="ceilometer-notification-agent" containerID="cri-o://5011a2da1b6766de9dceb07b094e5e5b90457583e5b1d7f21e441d5bc980ef81" gracePeriod=30 Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.369683 5012 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.599200762 podStartE2EDuration="1m5.36966736s" podCreationTimestamp="2026-02-19 05:42:47 +0000 UTC" firstStartedPulling="2026-02-19 05:42:48.786483529 +0000 UTC m=+1064.819806098" lastFinishedPulling="2026-02-19 05:43:51.556950127 +0000 UTC m=+1127.590272696" observedRunningTime="2026-02-19 05:43:52.172913915 +0000 UTC m=+1128.206236484" watchObservedRunningTime="2026-02-19 05:43:52.36966736 +0000 UTC m=+1128.402989929" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.374696 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-779bfc8b79-ffj7v"] Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.376358 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.382224 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.382427 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hg9kp" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.382651 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.391795 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.407225 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-779bfc8b79-ffj7v"] Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.437292 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9133f0f1-2d9e-462e-ba56-8a206f61bd03-config-data\") pod 
\"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.437353 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9133f0f1-2d9e-462e-ba56-8a206f61bd03-combined-ca-bundle\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.437470 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9133f0f1-2d9e-462e-ba56-8a206f61bd03-config-data-custom\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.437504 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9133f0f1-2d9e-462e-ba56-8a206f61bd03-logs\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.437547 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsmj9\" (UniqueName: \"kubernetes.io/projected/9133f0f1-2d9e-462e-ba56-8a206f61bd03-kube-api-access-bsmj9\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.453417 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5bb75756b-hd4xs"] Feb 19 05:43:52 crc 
kubenswrapper[5012]: I0219 05:43:52.454990 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.460673 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 19 05:43:52 crc kubenswrapper[5012]: W0219 05:43:52.505343 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4778529_f7d0_482b_bd67_003aaa7ca0ae.slice/crio-8b9702811b20b1bd747f944495d8fb979a7f48a2280de5fb9506d28c3b15880e WatchSource:0}: Error finding container 8b9702811b20b1bd747f944495d8fb979a7f48a2280de5fb9506d28c3b15880e: Status 404 returned error can't find the container with id 8b9702811b20b1bd747f944495d8fb979a7f48a2280de5fb9506d28c3b15880e Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.519121 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6788477597-b25r4"] Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.521254 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.536728 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5bb75756b-hd4xs"] Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.542592 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-config\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.542638 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-nb\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.542677 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9133f0f1-2d9e-462e-ba56-8a206f61bd03-config-data-custom\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543176 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee216ad2-2baf-4bba-a3fe-81acf9218af0-config-data-custom\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543244 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee216ad2-2baf-4bba-a3fe-81acf9218af0-config-data\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543272 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9133f0f1-2d9e-462e-ba56-8a206f61bd03-logs\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543380 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-sb\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543422 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsmj9\" (UniqueName: \"kubernetes.io/projected/9133f0f1-2d9e-462e-ba56-8a206f61bd03-kube-api-access-bsmj9\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543469 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-546k4\" (UniqueName: \"kubernetes.io/projected/ee216ad2-2baf-4bba-a3fe-81acf9218af0-kube-api-access-546k4\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc 
kubenswrapper[5012]: I0219 05:43:52.543495 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9133f0f1-2d9e-462e-ba56-8a206f61bd03-config-data\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543512 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhgbb\" (UniqueName: \"kubernetes.io/projected/d4384807-a690-4e84-8b2f-d1f82a6e801b-kube-api-access-nhgbb\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543564 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9133f0f1-2d9e-462e-ba56-8a206f61bd03-combined-ca-bundle\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543644 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-svc\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543711 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee216ad2-2baf-4bba-a3fe-81acf9218af0-logs\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " 
pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543733 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-swift-storage-0\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.543783 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee216ad2-2baf-4bba-a3fe-81acf9218af0-combined-ca-bundle\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.545007 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9133f0f1-2d9e-462e-ba56-8a206f61bd03-logs\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.551461 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.553955 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9133f0f1-2d9e-462e-ba56-8a206f61bd03-config-data-custom\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.561249 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 
19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.563169 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9133f0f1-2d9e-462e-ba56-8a206f61bd03-combined-ca-bundle\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.564110 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9133f0f1-2d9e-462e-ba56-8a206f61bd03-config-data\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.580202 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsmj9\" (UniqueName: \"kubernetes.io/projected/9133f0f1-2d9e-462e-ba56-8a206f61bd03-kube-api-access-bsmj9\") pod \"barbican-worker-779bfc8b79-ffj7v\" (UID: \"9133f0f1-2d9e-462e-ba56-8a206f61bd03\") " pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.587710 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6788477597-b25r4"] Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.647923 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-config\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648009 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-nb\") pod 
\"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648097 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee216ad2-2baf-4bba-a3fe-81acf9218af0-config-data-custom\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648139 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee216ad2-2baf-4bba-a3fe-81acf9218af0-config-data\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648202 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-sb\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648249 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-546k4\" (UniqueName: \"kubernetes.io/projected/ee216ad2-2baf-4bba-a3fe-81acf9218af0-kube-api-access-546k4\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648272 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhgbb\" (UniqueName: 
\"kubernetes.io/projected/d4384807-a690-4e84-8b2f-d1f82a6e801b-kube-api-access-nhgbb\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648355 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-svc\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648408 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee216ad2-2baf-4bba-a3fe-81acf9218af0-logs\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648432 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-swift-storage-0\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.648676 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee216ad2-2baf-4bba-a3fe-81acf9218af0-combined-ca-bundle\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.655471 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-sb\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.657819 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-nb\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.697528 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ee216ad2-2baf-4bba-a3fe-81acf9218af0-config-data-custom\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.699335 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-svc\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.700294 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee216ad2-2baf-4bba-a3fe-81acf9218af0-config-data\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.700803 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-swift-storage-0\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.700914 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee216ad2-2baf-4bba-a3fe-81acf9218af0-logs\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.701255 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-config\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.701778 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee216ad2-2baf-4bba-a3fe-81acf9218af0-combined-ca-bundle\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.712587 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-546k4\" (UniqueName: \"kubernetes.io/projected/ee216ad2-2baf-4bba-a3fe-81acf9218af0-kube-api-access-546k4\") pod \"barbican-keystone-listener-5bb75756b-hd4xs\" (UID: \"ee216ad2-2baf-4bba-a3fe-81acf9218af0\") " pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.712975 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-779bfc8b79-ffj7v" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.716745 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhgbb\" (UniqueName: \"kubernetes.io/projected/d4384807-a690-4e84-8b2f-d1f82a6e801b-kube-api-access-nhgbb\") pod \"dnsmasq-dns-6788477597-b25r4\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.725648 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-778557f86b-hp4xf"] Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.727885 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.732773 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.753624 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data-custom\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.753741 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j52pf\" (UniqueName: \"kubernetes.io/projected/6dfce017-0fe6-4613-910b-2c0f88af8bb2-kube-api-access-j52pf\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.754042 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.754160 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dfce017-0fe6-4613-910b-2c0f88af8bb2-logs\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.754258 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-combined-ca-bundle\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.772366 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-778557f86b-hp4xf"] Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.803852 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.862466 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dfce017-0fe6-4613-910b-2c0f88af8bb2-logs\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.862575 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-combined-ca-bundle\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.862652 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data-custom\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.862679 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j52pf\" (UniqueName: \"kubernetes.io/projected/6dfce017-0fe6-4613-910b-2c0f88af8bb2-kube-api-access-j52pf\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.862751 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " 
pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.863534 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dfce017-0fe6-4613-910b-2c0f88af8bb2-logs\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.874485 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-combined-ca-bundle\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.875701 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.882012 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data-custom\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.887096 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j52pf\" (UniqueName: \"kubernetes.io/projected/6dfce017-0fe6-4613-910b-2c0f88af8bb2-kube-api-access-j52pf\") pod \"barbican-api-778557f86b-hp4xf\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") " pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:52 crc kubenswrapper[5012]: 
I0219 05:43:52.888428 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:43:52 crc kubenswrapper[5012]: I0219 05:43:52.941049 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.167458 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerStarted","Data":"27cdd4f4a5ee55d08e9db9c6e3380ff5674b5137557956c3e1a7be05a457c3b6"} Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.172525 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"17c5eb4a-b8b3-4178-b5a0-2a37211266e6","Type":"ContainerStarted","Data":"65be4651ae750a28ef010be6e5423125eee000964a57e57affa6249b22b2eb91"} Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.172584 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"17c5eb4a-b8b3-4178-b5a0-2a37211266e6","Type":"ContainerStarted","Data":"1f8ff58170fed0be8d7680ffb942663aaa5ec3f1c388578dbd28c9e5432c8ac1"} Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.175429 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"d4778529-f7d0-482b-bd67-003aaa7ca0ae","Type":"ContainerStarted","Data":"8b9702811b20b1bd747f944495d8fb979a7f48a2280de5fb9506d28c3b15880e"} Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.188904 5012 generic.go:334] "Generic (PLEG): container finished" podID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerID="8a02fea3b4cd70626ac243cec71c2d7a481574c8f18cffc243a46c68a245c413" exitCode=0 Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.188927 5012 generic.go:334] "Generic (PLEG): container finished" podID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" 
containerID="bdf4b7c244764dd2879106070ed07ec4228686361067f77e4b0e731b44af052c" exitCode=2 Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.188936 5012 generic.go:334] "Generic (PLEG): container finished" podID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerID="e454f72d42b6df4ccbea155823e52fa4dbc71ac17be418579910450da7af968d" exitCode=0 Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.188955 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerDied","Data":"8a02fea3b4cd70626ac243cec71c2d7a481574c8f18cffc243a46c68a245c413"} Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.188976 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerDied","Data":"bdf4b7c244764dd2879106070ed07ec4228686361067f77e4b0e731b44af052c"} Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.188987 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerDied","Data":"e454f72d42b6df4ccbea155823e52fa4dbc71ac17be418579910450da7af968d"} Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.295226 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-779bfc8b79-ffj7v"] Feb 19 05:43:53 crc kubenswrapper[5012]: W0219 05:43:53.304815 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9133f0f1_2d9e_462e_ba56_8a206f61bd03.slice/crio-abf890b7e383d28b08223c36e7492c70ca45beb20890d51bf20f4f69a23f948d WatchSource:0}: Error finding container abf890b7e383d28b08223c36e7492c70ca45beb20890d51bf20f4f69a23f948d: Status 404 returned error can't find the container with id abf890b7e383d28b08223c36e7492c70ca45beb20890d51bf20f4f69a23f948d Feb 19 05:43:53 crc kubenswrapper[5012]: W0219 
05:43:53.328143 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee216ad2_2baf_4bba_a3fe_81acf9218af0.slice/crio-704d494d5a0e851513f788eaa7222c23d808d93c67dca6c0698a6c35c566b0f6 WatchSource:0}: Error finding container 704d494d5a0e851513f788eaa7222c23d808d93c67dca6c0698a6c35c566b0f6: Status 404 returned error can't find the container with id 704d494d5a0e851513f788eaa7222c23d808d93c67dca6c0698a6c35c566b0f6 Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.329425 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5bb75756b-hd4xs"] Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.336173 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-778557f86b-hp4xf"] Feb 19 05:43:53 crc kubenswrapper[5012]: I0219 05:43:53.420808 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6788477597-b25r4"] Feb 19 05:43:53 crc kubenswrapper[5012]: W0219 05:43:53.428608 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4384807_a690_4e84_8b2f_d1f82a6e801b.slice/crio-6d270cd40f45bfefaf4186556b1652f3082a09963c33d3dc4823e1b2c33258e1 WatchSource:0}: Error finding container 6d270cd40f45bfefaf4186556b1652f3082a09963c33d3dc4823e1b2c33258e1: Status 404 returned error can't find the container with id 6d270cd40f45bfefaf4186556b1652f3082a09963c33d3dc4823e1b2c33258e1 Feb 19 05:43:53 crc kubenswrapper[5012]: E0219 05:43:53.699681 5012 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb98c972c_b350_44a1_a7c5_028914fe7bfc.slice/crio-conmon-8dfd0224f4b707b6bfc0133d1f07ea378c585adcdbe5ef8ea62dd0f00fb98923.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb98c972c_b350_44a1_a7c5_028914fe7bfc.slice/crio-8dfd0224f4b707b6bfc0133d1f07ea378c585adcdbe5ef8ea62dd0f00fb98923.scope\": RecentStats: unable to find data in memory cache]" Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.204344 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" event={"ID":"ee216ad2-2baf-4bba-a3fe-81acf9218af0","Type":"ContainerStarted","Data":"704d494d5a0e851513f788eaa7222c23d808d93c67dca6c0698a6c35c566b0f6"} Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.205993 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"17c5eb4a-b8b3-4178-b5a0-2a37211266e6","Type":"ContainerStarted","Data":"4c7e7897254d29f17ce8fe214986663a24ab7ca2a73051f5e809d6f1daf31a29"} Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.206256 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.207286 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-778557f86b-hp4xf" event={"ID":"6dfce017-0fe6-4613-910b-2c0f88af8bb2","Type":"ContainerStarted","Data":"fe85e93188d20a0757f4ff89e6ad6e7cd4a5a7fc9569c748b0fe68bce7f50e89"} Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.207341 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-778557f86b-hp4xf" event={"ID":"6dfce017-0fe6-4613-910b-2c0f88af8bb2","Type":"ContainerStarted","Data":"831c1b2e39b299e04f560adb31739eb0da9f5a5165d710984ac8d2ab457658e9"} Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.208723 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-779bfc8b79-ffj7v" event={"ID":"9133f0f1-2d9e-462e-ba56-8a206f61bd03","Type":"ContainerStarted","Data":"abf890b7e383d28b08223c36e7492c70ca45beb20890d51bf20f4f69a23f948d"} Feb 19 05:43:54 
crc kubenswrapper[5012]: I0219 05:43:54.213996 5012 generic.go:334] "Generic (PLEG): container finished" podID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerID="5011a2da1b6766de9dceb07b094e5e5b90457583e5b1d7f21e441d5bc980ef81" exitCode=0 Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.214078 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerDied","Data":"5011a2da1b6766de9dceb07b094e5e5b90457583e5b1d7f21e441d5bc980ef81"} Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.217982 5012 generic.go:334] "Generic (PLEG): container finished" podID="b98c972c-b350-44a1-a7c5-028914fe7bfc" containerID="8dfd0224f4b707b6bfc0133d1f07ea378c585adcdbe5ef8ea62dd0f00fb98923" exitCode=0 Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.218091 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xj7dw" event={"ID":"b98c972c-b350-44a1-a7c5-028914fe7bfc","Type":"ContainerDied","Data":"8dfd0224f4b707b6bfc0133d1f07ea378c585adcdbe5ef8ea62dd0f00fb98923"} Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.219767 5012 generic.go:334] "Generic (PLEG): container finished" podID="d4384807-a690-4e84-8b2f-d1f82a6e801b" containerID="d5440c73e1cd6d63cff9dfa2d45367d2d8fced5e8574ebcc35f43099ef7046cb" exitCode=0 Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.219801 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6788477597-b25r4" event={"ID":"d4384807-a690-4e84-8b2f-d1f82a6e801b","Type":"ContainerDied","Data":"d5440c73e1cd6d63cff9dfa2d45367d2d8fced5e8574ebcc35f43099ef7046cb"} Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.219825 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6788477597-b25r4" event={"ID":"d4384807-a690-4e84-8b2f-d1f82a6e801b","Type":"ContainerStarted","Data":"6d270cd40f45bfefaf4186556b1652f3082a09963c33d3dc4823e1b2c33258e1"} Feb 19 05:43:54 crc 
kubenswrapper[5012]: I0219 05:43:54.244358 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.24433289 podStartE2EDuration="3.24433289s" podCreationTimestamp="2026-02-19 05:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:54.224930242 +0000 UTC m=+1130.258252811" watchObservedRunningTime="2026-02-19 05:43:54.24433289 +0000 UTC m=+1130.277655459" Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.925237 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7f669f7d76-2qg4s"] Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.926924 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.928745 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.932946 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 19 05:43:54 crc kubenswrapper[5012]: I0219 05:43:54.948534 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f669f7d76-2qg4s"] Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.036721 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-config-data-custom\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.037411 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-internal-tls-certs\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.037461 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrz6n\" (UniqueName: \"kubernetes.io/projected/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-kube-api-access-xrz6n\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.037507 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-combined-ca-bundle\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.037800 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-logs\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.038182 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-config-data\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.038356 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-public-tls-certs\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.146186 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-combined-ca-bundle\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.146774 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-logs\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.147013 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-config-data\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.147135 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-public-tls-certs\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.147684 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-config-data-custom\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.147837 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-internal-tls-certs\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.147905 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrz6n\" (UniqueName: \"kubernetes.io/projected/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-kube-api-access-xrz6n\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.148754 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-logs\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.154376 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-config-data-custom\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.154413 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-public-tls-certs\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.154568 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-internal-tls-certs\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.161790 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-combined-ca-bundle\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.167455 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-config-data\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.174874 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrz6n\" (UniqueName: \"kubernetes.io/projected/875bbaf1-6c43-4474-9f7b-8202b2d5ee1c-kube-api-access-xrz6n\") pod \"barbican-api-7f669f7d76-2qg4s\" (UID: \"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c\") " pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.244538 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.778546 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xj7dw" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.879073 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-config-data\") pod \"b98c972c-b350-44a1-a7c5-028914fe7bfc\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.879555 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-scripts\") pod \"b98c972c-b350-44a1-a7c5-028914fe7bfc\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.879575 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b98c972c-b350-44a1-a7c5-028914fe7bfc-etc-machine-id\") pod \"b98c972c-b350-44a1-a7c5-028914fe7bfc\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.879607 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-combined-ca-bundle\") pod \"b98c972c-b350-44a1-a7c5-028914fe7bfc\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.879631 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sghmp\" (UniqueName: \"kubernetes.io/projected/b98c972c-b350-44a1-a7c5-028914fe7bfc-kube-api-access-sghmp\") pod \"b98c972c-b350-44a1-a7c5-028914fe7bfc\" (UID: 
\"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.879730 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-db-sync-config-data\") pod \"b98c972c-b350-44a1-a7c5-028914fe7bfc\" (UID: \"b98c972c-b350-44a1-a7c5-028914fe7bfc\") " Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.880021 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b98c972c-b350-44a1-a7c5-028914fe7bfc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b98c972c-b350-44a1-a7c5-028914fe7bfc" (UID: "b98c972c-b350-44a1-a7c5-028914fe7bfc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.882286 5012 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b98c972c-b350-44a1-a7c5-028914fe7bfc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.889898 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b98c972c-b350-44a1-a7c5-028914fe7bfc" (UID: "b98c972c-b350-44a1-a7c5-028914fe7bfc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.891972 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b98c972c-b350-44a1-a7c5-028914fe7bfc-kube-api-access-sghmp" (OuterVolumeSpecName: "kube-api-access-sghmp") pod "b98c972c-b350-44a1-a7c5-028914fe7bfc" (UID: "b98c972c-b350-44a1-a7c5-028914fe7bfc"). InnerVolumeSpecName "kube-api-access-sghmp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.892838 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-scripts" (OuterVolumeSpecName: "scripts") pod "b98c972c-b350-44a1-a7c5-028914fe7bfc" (UID: "b98c972c-b350-44a1-a7c5-028914fe7bfc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.931066 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b98c972c-b350-44a1-a7c5-028914fe7bfc" (UID: "b98c972c-b350-44a1-a7c5-028914fe7bfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.964818 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-config-data" (OuterVolumeSpecName: "config-data") pod "b98c972c-b350-44a1-a7c5-028914fe7bfc" (UID: "b98c972c-b350-44a1-a7c5-028914fe7bfc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.985290 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.985341 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.985351 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.985362 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sghmp\" (UniqueName: \"kubernetes.io/projected/b98c972c-b350-44a1-a7c5-028914fe7bfc-kube-api-access-sghmp\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:55 crc kubenswrapper[5012]: I0219 05:43:55.985384 5012 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b98c972c-b350-44a1-a7c5-028914fe7bfc-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.280699 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-778557f86b-hp4xf" event={"ID":"6dfce017-0fe6-4613-910b-2c0f88af8bb2","Type":"ContainerStarted","Data":"0a1428fe2110ceec4a472e101ab178eb05366af10098eb515e5229853c308ba9"} Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.282784 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.282821 5012 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/barbican-api-778557f86b-hp4xf"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.290781 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-779bfc8b79-ffj7v" event={"ID":"9133f0f1-2d9e-462e-ba56-8a206f61bd03","Type":"ContainerStarted","Data":"c43bb3f9e5482ef4ca13a42f07ab087f195a8d34284dcae978d735032438fc88"}
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.293725 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b","Type":"ContainerDied","Data":"7e8d6baa89d2887533fedd350653f8112826dc19a88f8494ecc19699d4368a44"}
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.293761 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e8d6baa89d2887533fedd350653f8112826dc19a88f8494ecc19699d4368a44"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.308959 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xj7dw" event={"ID":"b98c972c-b350-44a1-a7c5-028914fe7bfc","Type":"ContainerDied","Data":"a84681fa37d45c4925f780e8954023bd4c066ed1cbb2bb7d3fe3e2f3209e4c8b"}
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.309021 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a84681fa37d45c4925f780e8954023bd4c066ed1cbb2bb7d3fe3e2f3209e4c8b"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.309093 5012 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-db-sync-xj7dw"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.322923 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6788477597-b25r4" event={"ID":"d4384807-a690-4e84-8b2f-d1f82a6e801b","Type":"ContainerStarted","Data":"21ac8b4f6fffa511d4235a3c327d3a8cd35ac9450d983816320d5195b11ee8bb"}
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.323711 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6788477597-b25r4"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.343281 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-778557f86b-hp4xf" podStartSLOduration=4.343255725 podStartE2EDuration="4.343255725s" podCreationTimestamp="2026-02-19 05:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:56.314981544 +0000 UTC m=+1132.348304113" watchObservedRunningTime="2026-02-19 05:43:56.343255725 +0000 UTC m=+1132.376578294"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.394415 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6788477597-b25r4" podStartSLOduration=4.39439228 podStartE2EDuration="4.39439228s" podCreationTimestamp="2026-02-19 05:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:56.367260768 +0000 UTC m=+1132.400583337" watchObservedRunningTime="2026-02-19 05:43:56.39439228 +0000 UTC m=+1132.427714849"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.420264 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f669f7d76-2qg4s"]
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.445711 5012 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.498992 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-run-httpd\") pod \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") "
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.499109 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-log-httpd\") pod \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") "
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.499383 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-scripts\") pod \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") "
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.499457 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-combined-ca-bundle\") pod \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") "
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.499492 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qkmd\" (UniqueName: \"kubernetes.io/projected/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-kube-api-access-6qkmd\") pod \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") "
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.499602 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-config-data\") pod \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") "
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.499665 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-sg-core-conf-yaml\") pod \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\" (UID: \"6bd6edb4-0376-458f-bb9d-f24e5e7ff47b\") "
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.511802 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-kube-api-access-6qkmd" (OuterVolumeSpecName: "kube-api-access-6qkmd") pod "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" (UID: "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b"). InnerVolumeSpecName "kube-api-access-6qkmd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.512243 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-scripts" (OuterVolumeSpecName: "scripts") pod "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" (UID: "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.512710 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" (UID: "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b"). InnerVolumeSpecName "run-httpd".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.513052 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" (UID: "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.605588 5012 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.605620 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.605629 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qkmd\" (UniqueName: \"kubernetes.io/projected/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-kube-api-access-6qkmd\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.605640 5012 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.609488 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" (UID: "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b"). InnerVolumeSpecName "sg-core-conf-yaml".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.707520 5012 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.787647 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 19 05:43:56 crc kubenswrapper[5012]: E0219 05:43:56.788544 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="ceilometer-notification-agent"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.788564 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="ceilometer-notification-agent"
Feb 19 05:43:56 crc kubenswrapper[5012]: E0219 05:43:56.788588 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b98c972c-b350-44a1-a7c5-028914fe7bfc" containerName="cinder-db-sync"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.788596 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b98c972c-b350-44a1-a7c5-028914fe7bfc" containerName="cinder-db-sync"
Feb 19 05:43:56 crc kubenswrapper[5012]: E0219 05:43:56.788624 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="proxy-httpd"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.788648 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="proxy-httpd"
Feb 19 05:43:56 crc kubenswrapper[5012]: E0219 05:43:56.788661 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="sg-core"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.788667 5012 state_mem.go:107] "Deleted CPUSet assignment"
podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="sg-core"
Feb 19 05:43:56 crc kubenswrapper[5012]: E0219 05:43:56.788694 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="ceilometer-central-agent"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.788703 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="ceilometer-central-agent"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.789120 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="sg-core"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.789149 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="b98c972c-b350-44a1-a7c5-028914fe7bfc" containerName="cinder-db-sync"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.789180 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="ceilometer-central-agent"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.789200 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="proxy-httpd"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.789229 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" containerName="ceilometer-notification-agent"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.805592 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.805969 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.806135 5012 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 19 05:43:56 crc
kubenswrapper[5012]: I0219 05:43:56.807750 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.811555 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.811755 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-c2ldt"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.811826 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.819746 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.863446 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6788477597-b25r4"]
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.887566 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757c7596dc-4ccqz"]
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.889146 5012 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.917365 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757c7596dc-4ccqz"]
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.920588 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9c1c12b-f055-417b-9300-706f98b0f8cc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.920624 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-scripts\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.920650 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.920677 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cch2x\" (UniqueName: \"kubernetes.io/projected/a9c1c12b-f055-417b-9300-706f98b0f8cc-kube-api-access-cch2x\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.920728 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:56 crc kubenswrapper[5012]: I0219 05:43:56.920753 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022402 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022461 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022518 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-sb\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022547 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-config\") pod
\"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022586 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-svc\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022614 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-swift-storage-0\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022636 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9c1c12b-f055-417b-9300-706f98b0f8cc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022655 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-scripts\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022679 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") "
pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022712 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-nb\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022727 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cch2x\" (UniqueName: \"kubernetes.io/projected/a9c1c12b-f055-417b-9300-706f98b0f8cc-kube-api-access-cch2x\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.022745 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wg28\" (UniqueName: \"kubernetes.io/projected/6c5e24dc-215e-4f19-8cf6-241bf57648f9-kube-api-access-9wg28\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.023690 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9c1c12b-f055-417b-9300-706f98b0f8cc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.042914 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc
kubenswrapper[5012]: I0219 05:43:57.048534 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-scripts\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.051270 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.055630 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.059836 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cch2x\" (UniqueName: \"kubernetes.io/projected/a9c1c12b-f055-417b-9300-706f98b0f8cc-kube-api-access-cch2x\") pod \"cinder-scheduler-0\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.094592 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.096804 5012 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-api-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.099534 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.124234 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-nb\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.124272 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wg28\" (UniqueName: \"kubernetes.io/projected/6c5e24dc-215e-4f19-8cf6-241bf57648f9-kube-api-access-9wg28\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.124392 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-sb\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.124421 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-config\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.124461 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName:
\"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-svc\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.124506 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-swift-storage-0\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.125617 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-nb\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.126130 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-swift-storage-0\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.127608 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-svc\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.128231 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-config\") pod
\"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.130093 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-sb\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.133021 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.135138 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" (UID: "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.148459 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wg28\" (UniqueName: \"kubernetes.io/projected/6c5e24dc-215e-4f19-8cf6-241bf57648f9-kube-api-access-9wg28\") pod \"dnsmasq-dns-757c7596dc-4ccqz\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.148901 5012 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.223941 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-config-data" (OuterVolumeSpecName: "config-data") pod "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" (UID: "6bd6edb4-0376-458f-bb9d-f24e5e7ff47b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.225804 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.226094 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.226143 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6b154229-6752-44d3-8b53-96147254af19-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.226164 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data-custom\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.226190 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName:
\"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-scripts\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.226227 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b154229-6752-44d3-8b53-96147254af19-logs\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.226244 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.226265 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxflh\" (UniqueName: \"kubernetes.io/projected/6b154229-6752-44d3-8b53-96147254af19-kube-api-access-rxflh\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0"
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.226397 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.226411 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.332475 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.334468 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6b154229-6752-44d3-8b53-96147254af19-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.334593 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data-custom\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.334687 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-scripts\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.334758 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b154229-6752-44d3-8b53-96147254af19-logs\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.334824 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.334890 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxflh\" (UniqueName: \"kubernetes.io/projected/6b154229-6752-44d3-8b53-96147254af19-kube-api-access-rxflh\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.336409 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6b154229-6752-44d3-8b53-96147254af19-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.336838 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b154229-6752-44d3-8b53-96147254af19-logs\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.338509 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.345515 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.347229 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-scripts\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " 
pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.352264 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data-custom\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.354727 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.366779 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxflh\" (UniqueName: \"kubernetes.io/projected/6b154229-6752-44d3-8b53-96147254af19-kube-api-access-rxflh\") pod \"cinder-api-0\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.380621 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerStarted","Data":"ba936c2a2295accf188d98dabc618f0a4eb4fcc0b863a622cffddbfebb246fc3"} Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.404474 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" event={"ID":"ee216ad2-2baf-4bba-a3fe-81acf9218af0","Type":"ContainerStarted","Data":"e232d1d5952ee862af66af4dfaf70596d6f99efe0dff23de65977b02a1393257"} Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.413516 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.097796548 podStartE2EDuration="6.41350214s" podCreationTimestamp="2026-02-19 05:43:51 +0000 UTC" firstStartedPulling="2026-02-19 05:43:52.535976049 +0000 UTC m=+1128.569298618" lastFinishedPulling="2026-02-19 05:43:55.851681641 +0000 UTC 
m=+1131.885004210" observedRunningTime="2026-02-19 05:43:57.40513688 +0000 UTC m=+1133.438459449" watchObservedRunningTime="2026-02-19 05:43:57.41350214 +0000 UTC m=+1133.446824709" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.438173 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-779bfc8b79-ffj7v" event={"ID":"9133f0f1-2d9e-462e-ba56-8a206f61bd03","Type":"ContainerStarted","Data":"86550ff652eb777d0ef3deb2390b7cf98e95a512390779f2190d07e8bf35ef59"} Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.465772 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-779bfc8b79-ffj7v" podStartSLOduration=2.9357263749999998 podStartE2EDuration="5.465751603s" podCreationTimestamp="2026-02-19 05:43:52 +0000 UTC" firstStartedPulling="2026-02-19 05:43:53.321669624 +0000 UTC m=+1129.354992193" lastFinishedPulling="2026-02-19 05:43:55.851694852 +0000 UTC m=+1131.885017421" observedRunningTime="2026-02-19 05:43:57.463337852 +0000 UTC m=+1133.496660421" watchObservedRunningTime="2026-02-19 05:43:57.465751603 +0000 UTC m=+1133.499074172" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.474912 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"d4778529-f7d0-482b-bd67-003aaa7ca0ae","Type":"ContainerStarted","Data":"047631b0cd4cbda2df14045f1f332c69a1e0680f36346341dbc4eecc5870407f"} Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.529890 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.535562 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.537789 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f669f7d76-2qg4s" event={"ID":"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c","Type":"ContainerStarted","Data":"873e505cdb32ddd1a1e8218374bdf9f511db9dbe463d5a652ec922bdd0b36c47"} Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.537829 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f669f7d76-2qg4s" event={"ID":"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c","Type":"ContainerStarted","Data":"2dcecd941288a9e0ccb5ed44503be98025613ca3b1582d3509bf0a5378ca32f5"} Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.628922 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=3.287505265 podStartE2EDuration="6.628902073s" podCreationTimestamp="2026-02-19 05:43:51 +0000 UTC" firstStartedPulling="2026-02-19 05:43:52.510328555 +0000 UTC m=+1128.543651134" lastFinishedPulling="2026-02-19 05:43:55.851725373 +0000 UTC m=+1131.885047942" observedRunningTime="2026-02-19 05:43:57.552943734 +0000 UTC m=+1133.586266303" watchObservedRunningTime="2026-02-19 05:43:57.628902073 +0000 UTC m=+1133.662224642" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.672135 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.729113 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.750700 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.753715 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.761398 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.761647 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.762381 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.767130 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 05:43:57 crc kubenswrapper[5012]: W0219 05:43:57.800744 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9c1c12b_f055_417b_9300_706f98b0f8cc.slice/crio-700c4e558c2fc29ae4be5133cfa56a73c8a1d1f1fb5ea15ea68c90c99124dbe1 WatchSource:0}: Error finding container 700c4e558c2fc29ae4be5133cfa56a73c8a1d1f1fb5ea15ea68c90c99124dbe1: Status 404 returned error can't find the container with id 700c4e558c2fc29ae4be5133cfa56a73c8a1d1f1fb5ea15ea68c90c99124dbe1 Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.853881 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-config-data\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.854338 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52m69\" (UniqueName: \"kubernetes.io/projected/236f420e-8855-41f8-8b25-813be7b28799-kube-api-access-52m69\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 
crc kubenswrapper[5012]: I0219 05:43:57.854367 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.854433 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.854458 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-log-httpd\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.854688 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-run-httpd\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.854716 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-scripts\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.887035 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757c7596dc-4ccqz"] 
Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.956433 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-config-data\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.956520 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52m69\" (UniqueName: \"kubernetes.io/projected/236f420e-8855-41f8-8b25-813be7b28799-kube-api-access-52m69\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.956550 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.956579 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.956602 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-log-httpd\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.956658 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-run-httpd\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.956674 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-scripts\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.960480 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-log-httpd\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.960829 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-scripts\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.961059 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-run-httpd\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.973058 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.973487 5012 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-config-data\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:57 crc kubenswrapper[5012]: I0219 05:43:57.980919 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.000095 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52m69\" (UniqueName: \"kubernetes.io/projected/236f420e-8855-41f8-8b25-813be7b28799-kube-api-access-52m69\") pod \"ceilometer-0\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " pod="openstack/ceilometer-0" Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.129678 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.193254 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.562018 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" event={"ID":"ee216ad2-2baf-4bba-a3fe-81acf9218af0","Type":"ContainerStarted","Data":"a698fa179e07b7f282602fdb0616fddbf0515fd3c161369bc45c4a476a8b36fa"} Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.570400 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" event={"ID":"6c5e24dc-215e-4f19-8cf6-241bf57648f9","Type":"ContainerStarted","Data":"b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8"} Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.570435 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" event={"ID":"6c5e24dc-215e-4f19-8cf6-241bf57648f9","Type":"ContainerStarted","Data":"ead40496902b159e9bebd9ba1a479551b8997a76aa96d1285d684eafe66d05a5"} Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.575090 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a9c1c12b-f055-417b-9300-706f98b0f8cc","Type":"ContainerStarted","Data":"700c4e558c2fc29ae4be5133cfa56a73c8a1d1f1fb5ea15ea68c90c99124dbe1"} Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.588121 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5bb75756b-hd4xs" podStartSLOduration=4.065359082 podStartE2EDuration="6.588101907s" podCreationTimestamp="2026-02-19 05:43:52 +0000 UTC" firstStartedPulling="2026-02-19 05:43:53.331896781 +0000 UTC m=+1129.365219350" lastFinishedPulling="2026-02-19 05:43:55.854639606 +0000 UTC m=+1131.887962175" observedRunningTime="2026-02-19 05:43:58.580947538 +0000 
UTC m=+1134.614270107" watchObservedRunningTime="2026-02-19 05:43:58.588101907 +0000 UTC m=+1134.621424476" Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.590208 5012 generic.go:334] "Generic (PLEG): container finished" podID="d5eb71f6-31df-418a-98dd-11668ff38825" containerID="1740dd45d12f4fba32d28fe0edd137672168109214e3411aa79b0b01fe5420c4" exitCode=137 Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.590275 5012 generic.go:334] "Generic (PLEG): container finished" podID="d5eb71f6-31df-418a-98dd-11668ff38825" containerID="0edf70792244ac07bbfc8312a7939b51e2c1f6efdd9a9026a76bb21f0665c246" exitCode=137 Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.590417 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c45b5647f-k799c" event={"ID":"d5eb71f6-31df-418a-98dd-11668ff38825","Type":"ContainerDied","Data":"1740dd45d12f4fba32d28fe0edd137672168109214e3411aa79b0b01fe5420c4"} Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.590455 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c45b5647f-k799c" event={"ID":"d5eb71f6-31df-418a-98dd-11668ff38825","Type":"ContainerDied","Data":"0edf70792244ac07bbfc8312a7939b51e2c1f6efdd9a9026a76bb21f0665c246"} Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.593599 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f669f7d76-2qg4s" event={"ID":"875bbaf1-6c43-4474-9f7b-8202b2d5ee1c","Type":"ContainerStarted","Data":"f7fd69acd3ad1cf95ced27b912d252abccdac551c1477e1dec5ef9901f79fef6"} Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.593879 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6788477597-b25r4" podUID="d4384807-a690-4e84-8b2f-d1f82a6e801b" containerName="dnsmasq-dns" containerID="cri-o://21ac8b4f6fffa511d4235a3c327d3a8cd35ac9450d983816320d5195b11ee8bb" gracePeriod=10 Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.594320 5012 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.595127 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:43:58 crc kubenswrapper[5012]: I0219 05:43:58.735417 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bd6edb4-0376-458f-bb9d-f24e5e7ff47b" path="/var/lib/kubelet/pods/6bd6edb4-0376-458f-bb9d-f24e5e7ff47b/volumes" Feb 19 05:43:59 crc kubenswrapper[5012]: I0219 05:43:59.358705 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7f669f7d76-2qg4s" podStartSLOduration=5.358683401 podStartE2EDuration="5.358683401s" podCreationTimestamp="2026-02-19 05:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:43:58.635808086 +0000 UTC m=+1134.669130655" watchObservedRunningTime="2026-02-19 05:43:59.358683401 +0000 UTC m=+1135.392005970" Feb 19 05:43:59 crc kubenswrapper[5012]: I0219 05:43:59.363703 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 05:43:59 crc kubenswrapper[5012]: I0219 05:43:59.601444 5012 generic.go:334] "Generic (PLEG): container finished" podID="d4384807-a690-4e84-8b2f-d1f82a6e801b" containerID="21ac8b4f6fffa511d4235a3c327d3a8cd35ac9450d983816320d5195b11ee8bb" exitCode=0 Feb 19 05:43:59 crc kubenswrapper[5012]: I0219 05:43:59.601514 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6788477597-b25r4" event={"ID":"d4384807-a690-4e84-8b2f-d1f82a6e801b","Type":"ContainerDied","Data":"21ac8b4f6fffa511d4235a3c327d3a8cd35ac9450d983816320d5195b11ee8bb"} Feb 19 05:43:59 crc kubenswrapper[5012]: I0219 05:43:59.603057 5012 generic.go:334] "Generic (PLEG): container finished" podID="6c5e24dc-215e-4f19-8cf6-241bf57648f9" 
containerID="b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8" exitCode=0 Feb 19 05:43:59 crc kubenswrapper[5012]: I0219 05:43:59.603118 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" event={"ID":"6c5e24dc-215e-4f19-8cf6-241bf57648f9","Type":"ContainerDied","Data":"b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8"} Feb 19 05:43:59 crc kubenswrapper[5012]: W0219 05:43:59.761860 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b154229_6752_44d3_8b53_96147254af19.slice/crio-653027f305aefd23daa068d7977fcd142c7d791955ffc465b3adbe51a1e997a7 WatchSource:0}: Error finding container 653027f305aefd23daa068d7977fcd142c7d791955ffc465b3adbe51a1e997a7: Status 404 returned error can't find the container with id 653027f305aefd23daa068d7977fcd142c7d791955ffc465b3adbe51a1e997a7 Feb 19 05:43:59 crc kubenswrapper[5012]: I0219 05:43:59.971242 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:43:59.998725 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-config-data\") pod \"d5eb71f6-31df-418a-98dd-11668ff38825\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:43:59.998763 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d5eb71f6-31df-418a-98dd-11668ff38825-horizon-secret-key\") pod \"d5eb71f6-31df-418a-98dd-11668ff38825\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:43:59.998789 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5eb71f6-31df-418a-98dd-11668ff38825-logs\") pod \"d5eb71f6-31df-418a-98dd-11668ff38825\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:43:59.998895 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq85g\" (UniqueName: \"kubernetes.io/projected/d5eb71f6-31df-418a-98dd-11668ff38825-kube-api-access-sq85g\") pod \"d5eb71f6-31df-418a-98dd-11668ff38825\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:43:59.998943 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-scripts\") pod \"d5eb71f6-31df-418a-98dd-11668ff38825\" (UID: \"d5eb71f6-31df-418a-98dd-11668ff38825\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:43:59.999923 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d5eb71f6-31df-418a-98dd-11668ff38825-logs" (OuterVolumeSpecName: "logs") pod "d5eb71f6-31df-418a-98dd-11668ff38825" (UID: "d5eb71f6-31df-418a-98dd-11668ff38825"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.004571 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5eb71f6-31df-418a-98dd-11668ff38825-kube-api-access-sq85g" (OuterVolumeSpecName: "kube-api-access-sq85g") pod "d5eb71f6-31df-418a-98dd-11668ff38825" (UID: "d5eb71f6-31df-418a-98dd-11668ff38825"). InnerVolumeSpecName "kube-api-access-sq85g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.006995 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5eb71f6-31df-418a-98dd-11668ff38825-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d5eb71f6-31df-418a-98dd-11668ff38825" (UID: "d5eb71f6-31df-418a-98dd-11668ff38825"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.034090 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-scripts" (OuterVolumeSpecName: "scripts") pod "d5eb71f6-31df-418a-98dd-11668ff38825" (UID: "d5eb71f6-31df-418a-98dd-11668ff38825"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.041490 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-config-data" (OuterVolumeSpecName: "config-data") pod "d5eb71f6-31df-418a-98dd-11668ff38825" (UID: "d5eb71f6-31df-418a-98dd-11668ff38825"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.101221 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.101583 5012 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d5eb71f6-31df-418a-98dd-11668ff38825-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.101595 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5eb71f6-31df-418a-98dd-11668ff38825-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.101608 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq85g\" (UniqueName: \"kubernetes.io/projected/d5eb71f6-31df-418a-98dd-11668ff38825-kube-api-access-sq85g\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.101618 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d5eb71f6-31df-418a-98dd-11668ff38825-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.339749 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.411118 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhgbb\" (UniqueName: \"kubernetes.io/projected/d4384807-a690-4e84-8b2f-d1f82a6e801b-kube-api-access-nhgbb\") pod \"d4384807-a690-4e84-8b2f-d1f82a6e801b\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.411222 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-nb\") pod \"d4384807-a690-4e84-8b2f-d1f82a6e801b\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.411314 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-swift-storage-0\") pod \"d4384807-a690-4e84-8b2f-d1f82a6e801b\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.412475 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-config\") pod \"d4384807-a690-4e84-8b2f-d1f82a6e801b\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.412517 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-sb\") pod \"d4384807-a690-4e84-8b2f-d1f82a6e801b\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.412578 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-svc\") pod \"d4384807-a690-4e84-8b2f-d1f82a6e801b\" (UID: \"d4384807-a690-4e84-8b2f-d1f82a6e801b\") " Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.416889 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4384807-a690-4e84-8b2f-d1f82a6e801b-kube-api-access-nhgbb" (OuterVolumeSpecName: "kube-api-access-nhgbb") pod "d4384807-a690-4e84-8b2f-d1f82a6e801b" (UID: "d4384807-a690-4e84-8b2f-d1f82a6e801b"). InnerVolumeSpecName "kube-api-access-nhgbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.465741 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.480029 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d4384807-a690-4e84-8b2f-d1f82a6e801b" (UID: "d4384807-a690-4e84-8b2f-d1f82a6e801b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.487757 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d4384807-a690-4e84-8b2f-d1f82a6e801b" (UID: "d4384807-a690-4e84-8b2f-d1f82a6e801b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.500586 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d4384807-a690-4e84-8b2f-d1f82a6e801b" (UID: "d4384807-a690-4e84-8b2f-d1f82a6e801b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.508763 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-config" (OuterVolumeSpecName: "config") pod "d4384807-a690-4e84-8b2f-d1f82a6e801b" (UID: "d4384807-a690-4e84-8b2f-d1f82a6e801b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.509892 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d4384807-a690-4e84-8b2f-d1f82a6e801b" (UID: "d4384807-a690-4e84-8b2f-d1f82a6e801b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.517934 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.517970 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhgbb\" (UniqueName: \"kubernetes.io/projected/d4384807-a690-4e84-8b2f-d1f82a6e801b-kube-api-access-nhgbb\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.517985 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.517998 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.518009 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.518020 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4384807-a690-4e84-8b2f-d1f82a6e801b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.571979 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.631991 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerStarted","Data":"0b4212ecca9b60999638c1e6662994f4b7843d12f33587c1778eba71df434b72"} Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.636542 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c45b5647f-k799c" event={"ID":"d5eb71f6-31df-418a-98dd-11668ff38825","Type":"ContainerDied","Data":"86338dd7d36f9586a8f23b3288040adf41c4f986fc6d17aadaff0853e2749dd7"} Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.636579 5012 scope.go:117] "RemoveContainer" containerID="1740dd45d12f4fba32d28fe0edd137672168109214e3411aa79b0b01fe5420c4" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.636628 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c45b5647f-k799c" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.641637 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6788477597-b25r4" event={"ID":"d4384807-a690-4e84-8b2f-d1f82a6e801b","Type":"ContainerDied","Data":"6d270cd40f45bfefaf4186556b1652f3082a09963c33d3dc4823e1b2c33258e1"} Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.641692 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6788477597-b25r4" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.643707 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6b154229-6752-44d3-8b53-96147254af19","Type":"ContainerStarted","Data":"653027f305aefd23daa068d7977fcd142c7d791955ffc465b3adbe51a1e997a7"} Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.646353 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" event={"ID":"6c5e24dc-215e-4f19-8cf6-241bf57648f9","Type":"ContainerStarted","Data":"93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38"} Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.647053 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.669540 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" podStartSLOduration=4.669523342 podStartE2EDuration="4.669523342s" podCreationTimestamp="2026-02-19 05:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:00.669392889 +0000 UTC m=+1136.702715468" watchObservedRunningTime="2026-02-19 05:44:00.669523342 +0000 UTC m=+1136.702845911" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.740427 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c45b5647f-k799c"] Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.750525 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5c45b5647f-k799c"] Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.770959 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6788477597-b25r4"] Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.781331 5012 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6788477597-b25r4"] Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.857095 5012 scope.go:117] "RemoveContainer" containerID="0edf70792244ac07bbfc8312a7939b51e2c1f6efdd9a9026a76bb21f0665c246" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.932211 5012 scope.go:117] "RemoveContainer" containerID="21ac8b4f6fffa511d4235a3c327d3a8cd35ac9450d983816320d5195b11ee8bb" Feb 19 05:44:00 crc kubenswrapper[5012]: I0219 05:44:00.953883 5012 scope.go:117] "RemoveContainer" containerID="d5440c73e1cd6d63cff9dfa2d45367d2d8fced5e8574ebcc35f43099ef7046cb" Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.667186 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6b154229-6752-44d3-8b53-96147254af19","Type":"ContainerStarted","Data":"9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a"} Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.668436 5012 generic.go:334] "Generic (PLEG): container finished" podID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerID="ba936c2a2295accf188d98dabc618f0a4eb4fcc0b863a622cffddbfebb246fc3" exitCode=1 Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.668692 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerDied","Data":"ba936c2a2295accf188d98dabc618f0a4eb4fcc0b863a622cffddbfebb246fc3"} Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.669409 5012 scope.go:117] "RemoveContainer" containerID="ba936c2a2295accf188d98dabc618f0a4eb4fcc0b863a622cffddbfebb246fc3" Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.674629 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerStarted","Data":"90ba300b50323aa9b522179eb4980608476a719c46e6c6ece43f44fc2dbdc9ad"} Feb 19 05:44:01 crc kubenswrapper[5012]: 
I0219 05:44:01.679923 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a9c1c12b-f055-417b-9300-706f98b0f8cc","Type":"ContainerStarted","Data":"ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b"} Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.787433 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.793740 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.808261 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.808318 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.821379 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.821989 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.871436 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Feb 19 05:44:01 crc kubenswrapper[5012]: I0219 05:44:01.882226 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75cc7d9585-x8r8l" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.157:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.157:8080: connect: connection refused" Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.130774 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-778557f86b-hp4xf" Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.696157 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6b154229-6752-44d3-8b53-96147254af19","Type":"ContainerStarted","Data":"4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d"} Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.696701 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6b154229-6752-44d3-8b53-96147254af19" containerName="cinder-api-log" containerID="cri-o://9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a" gracePeriod=30 Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.697054 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.697438 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6b154229-6752-44d3-8b53-96147254af19" containerName="cinder-api" containerID="cri-o://4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d" gracePeriod=30 Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.720576 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.719249221 podStartE2EDuration="5.719249221s" podCreationTimestamp="2026-02-19 05:43:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:02.712915172 +0000 UTC m=+1138.746237781" watchObservedRunningTime="2026-02-19 05:44:02.719249221 +0000 UTC m=+1138.752571830" Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.746077 5012 generic.go:334] "Generic (PLEG): container finished" podID="787f8a71-dee4-40d2-b33b-85bcfc58f921" containerID="c3b30cfc4d7788c5bf2800aec00271d7a398ee5903276843825107c74fa7f5b9" 
exitCode=0 Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.755672 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4384807-a690-4e84-8b2f-d1f82a6e801b" path="/var/lib/kubelet/pods/d4384807-a690-4e84-8b2f-d1f82a6e801b/volumes" Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.757450 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5eb71f6-31df-418a-98dd-11668ff38825" path="/var/lib/kubelet/pods/d5eb71f6-31df-418a-98dd-11668ff38825/volumes" Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.759066 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerStarted","Data":"4812a8f6df189761983e7fbdb500126b62d33c0b69d53f9becfbce526c3f3865"} Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.759112 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerStarted","Data":"01c17cd2fd8d4c7f25652d74baa178f4238cfbbc1ba02a9f9c5c2148a344aa2a"} Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.759151 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerStarted","Data":"7d42600135c89d15a2ed647cd5fc2d79a4290622986701fbe5330b3c8214cc54"} Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.759166 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-px7xk" event={"ID":"787f8a71-dee4-40d2-b33b-85bcfc58f921","Type":"ContainerDied","Data":"c3b30cfc4d7788c5bf2800aec00271d7a398ee5903276843825107c74fa7f5b9"} Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.759184 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"a9c1c12b-f055-417b-9300-706f98b0f8cc","Type":"ContainerStarted","Data":"bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0"} Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.779921 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.780498 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.635617111 podStartE2EDuration="6.78048026s" podCreationTimestamp="2026-02-19 05:43:56 +0000 UTC" firstStartedPulling="2026-02-19 05:43:57.808039655 +0000 UTC m=+1133.841362224" lastFinishedPulling="2026-02-19 05:43:59.952902804 +0000 UTC m=+1135.986225373" observedRunningTime="2026-02-19 05:44:02.780135641 +0000 UTC m=+1138.813458220" watchObservedRunningTime="2026-02-19 05:44:02.78048026 +0000 UTC m=+1138.813802829" Feb 19 05:44:02 crc kubenswrapper[5012]: I0219 05:44:02.808010 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.683617 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.779626 5012 generic.go:334] "Generic (PLEG): container finished" podID="6b154229-6752-44d3-8b53-96147254af19" containerID="4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d" exitCode=0 Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.780170 5012 generic.go:334] "Generic (PLEG): container finished" podID="6b154229-6752-44d3-8b53-96147254af19" containerID="9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a" exitCode=143 Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.780232 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6b154229-6752-44d3-8b53-96147254af19","Type":"ContainerDied","Data":"4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d"} Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.780275 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6b154229-6752-44d3-8b53-96147254af19","Type":"ContainerDied","Data":"9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a"} Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.780288 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6b154229-6752-44d3-8b53-96147254af19","Type":"ContainerDied","Data":"653027f305aefd23daa068d7977fcd142c7d791955ffc465b3adbe51a1e997a7"} Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.780325 5012 scope.go:117] "RemoveContainer" containerID="4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.780507 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.795192 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxflh\" (UniqueName: \"kubernetes.io/projected/6b154229-6752-44d3-8b53-96147254af19-kube-api-access-rxflh\") pod \"6b154229-6752-44d3-8b53-96147254af19\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.795233 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data-custom\") pod \"6b154229-6752-44d3-8b53-96147254af19\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.795377 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data\") pod \"6b154229-6752-44d3-8b53-96147254af19\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.795398 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6b154229-6752-44d3-8b53-96147254af19-etc-machine-id\") pod \"6b154229-6752-44d3-8b53-96147254af19\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.795436 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b154229-6752-44d3-8b53-96147254af19-logs\") pod \"6b154229-6752-44d3-8b53-96147254af19\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.795511 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-scripts\") pod \"6b154229-6752-44d3-8b53-96147254af19\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.795607 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-combined-ca-bundle\") pod \"6b154229-6752-44d3-8b53-96147254af19\" (UID: \"6b154229-6752-44d3-8b53-96147254af19\") " Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.796716 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b154229-6752-44d3-8b53-96147254af19-logs" (OuterVolumeSpecName: "logs") pod "6b154229-6752-44d3-8b53-96147254af19" (UID: "6b154229-6752-44d3-8b53-96147254af19"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.803294 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b154229-6752-44d3-8b53-96147254af19-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6b154229-6752-44d3-8b53-96147254af19" (UID: "6b154229-6752-44d3-8b53-96147254af19"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.807225 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerStarted","Data":"6762263a345e4365421a46f2f13896eee2b40581b23287e4ae263f9733a40058"} Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.807365 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.809128 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-scripts" (OuterVolumeSpecName: "scripts") pod "6b154229-6752-44d3-8b53-96147254af19" (UID: "6b154229-6752-44d3-8b53-96147254af19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.810575 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6b154229-6752-44d3-8b53-96147254af19" (UID: "6b154229-6752-44d3-8b53-96147254af19"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.823383 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b154229-6752-44d3-8b53-96147254af19-kube-api-access-rxflh" (OuterVolumeSpecName: "kube-api-access-rxflh") pod "6b154229-6752-44d3-8b53-96147254af19" (UID: "6b154229-6752-44d3-8b53-96147254af19"). InnerVolumeSpecName "kube-api-access-rxflh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.845492 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.826426414 podStartE2EDuration="6.845475163s" podCreationTimestamp="2026-02-19 05:43:57 +0000 UTC" firstStartedPulling="2026-02-19 05:44:00.487619311 +0000 UTC m=+1136.520941880" lastFinishedPulling="2026-02-19 05:44:03.50666806 +0000 UTC m=+1139.539990629" observedRunningTime="2026-02-19 05:44:03.84209477 +0000 UTC m=+1139.875417339" watchObservedRunningTime="2026-02-19 05:44:03.845475163 +0000 UTC m=+1139.878797732" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.899609 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxflh\" (UniqueName: \"kubernetes.io/projected/6b154229-6752-44d3-8b53-96147254af19-kube-api-access-rxflh\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.899636 5012 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.899645 5012 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6b154229-6752-44d3-8b53-96147254af19-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.899655 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b154229-6752-44d3-8b53-96147254af19-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.899663 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 
05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.910908 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data" (OuterVolumeSpecName: "config-data") pod "6b154229-6752-44d3-8b53-96147254af19" (UID: "6b154229-6752-44d3-8b53-96147254af19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:03 crc kubenswrapper[5012]: I0219 05:44:03.967452 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b154229-6752-44d3-8b53-96147254af19" (UID: "6b154229-6752-44d3-8b53-96147254af19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.004415 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.004447 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b154229-6752-44d3-8b53-96147254af19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.056266 5012 scope.go:117] "RemoveContainer" containerID="9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.100353 5012 scope.go:117] "RemoveContainer" containerID="4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d" Feb 19 05:44:04 crc kubenswrapper[5012]: E0219 05:44:04.100973 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d\": container with ID starting with 4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d not found: ID does not exist" containerID="4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.101064 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d"} err="failed to get container status \"4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d\": rpc error: code = NotFound desc = could not find container \"4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d\": container with ID starting with 4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d not found: ID does not exist" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.101124 5012 scope.go:117] "RemoveContainer" containerID="9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a" Feb 19 05:44:04 crc kubenswrapper[5012]: E0219 05:44:04.121700 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a\": container with ID starting with 9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a not found: ID does not exist" containerID="9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.121836 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a"} err="failed to get container status \"9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a\": rpc error: code = NotFound desc = could not find container \"9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a\": container with ID 
starting with 9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a not found: ID does not exist" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.121869 5012 scope.go:117] "RemoveContainer" containerID="4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.126245 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d"} err="failed to get container status \"4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d\": rpc error: code = NotFound desc = could not find container \"4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d\": container with ID starting with 4831e08799399322a35acd6cd6689186986d4060b9299adc6fd1991872e4ae8d not found: ID does not exist" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.126294 5012 scope.go:117] "RemoveContainer" containerID="9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.127728 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a"} err="failed to get container status \"9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a\": rpc error: code = NotFound desc = could not find container \"9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a\": container with ID starting with 9f402d485836b2f6781001982bc37836b7dfb352d759d48d9b421c938176f83a not found: ID does not exist" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.168820 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.183180 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 
05:44:04.189848 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 05:44:04 crc kubenswrapper[5012]: E0219 05:44:04.190357 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5eb71f6-31df-418a-98dd-11668ff38825" containerName="horizon" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.190460 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5eb71f6-31df-418a-98dd-11668ff38825" containerName="horizon" Feb 19 05:44:04 crc kubenswrapper[5012]: E0219 05:44:04.190524 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5eb71f6-31df-418a-98dd-11668ff38825" containerName="horizon-log" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.190594 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5eb71f6-31df-418a-98dd-11668ff38825" containerName="horizon-log" Feb 19 05:44:04 crc kubenswrapper[5012]: E0219 05:44:04.190659 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4384807-a690-4e84-8b2f-d1f82a6e801b" containerName="init" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.190713 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4384807-a690-4e84-8b2f-d1f82a6e801b" containerName="init" Feb 19 05:44:04 crc kubenswrapper[5012]: E0219 05:44:04.191179 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4384807-a690-4e84-8b2f-d1f82a6e801b" containerName="dnsmasq-dns" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.191256 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4384807-a690-4e84-8b2f-d1f82a6e801b" containerName="dnsmasq-dns" Feb 19 05:44:04 crc kubenswrapper[5012]: E0219 05:44:04.191366 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b154229-6752-44d3-8b53-96147254af19" containerName="cinder-api-log" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.191443 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b154229-6752-44d3-8b53-96147254af19" 
containerName="cinder-api-log" Feb 19 05:44:04 crc kubenswrapper[5012]: E0219 05:44:04.191516 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b154229-6752-44d3-8b53-96147254af19" containerName="cinder-api" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.191576 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b154229-6752-44d3-8b53-96147254af19" containerName="cinder-api" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.192266 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4384807-a690-4e84-8b2f-d1f82a6e801b" containerName="dnsmasq-dns" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.192375 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b154229-6752-44d3-8b53-96147254af19" containerName="cinder-api-log" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.192473 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b154229-6752-44d3-8b53-96147254af19" containerName="cinder-api" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.192542 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5eb71f6-31df-418a-98dd-11668ff38825" containerName="horizon" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.192604 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5eb71f6-31df-418a-98dd-11668ff38825" containerName="horizon-log" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.193814 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.196011 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.198119 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.198383 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.211427 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.230622 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-px7xk" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.321399 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.321767 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.321877 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-config-data-custom\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " 
pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.321954 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.322025 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c548edc-6755-4310-9b8d-780a384ec6bd-logs\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.322098 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c548edc-6755-4310-9b8d-780a384ec6bd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.322189 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5spvl\" (UniqueName: \"kubernetes.io/projected/4c548edc-6755-4310-9b8d-780a384ec6bd-kube-api-access-5spvl\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.322264 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-config-data\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.322391 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-scripts\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.426458 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-config\") pod \"787f8a71-dee4-40d2-b33b-85bcfc58f921\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.426997 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s87kc\" (UniqueName: \"kubernetes.io/projected/787f8a71-dee4-40d2-b33b-85bcfc58f921-kube-api-access-s87kc\") pod \"787f8a71-dee4-40d2-b33b-85bcfc58f921\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.427142 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-combined-ca-bundle\") pod \"787f8a71-dee4-40d2-b33b-85bcfc58f921\" (UID: \"787f8a71-dee4-40d2-b33b-85bcfc58f921\") " Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.427503 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5spvl\" (UniqueName: \"kubernetes.io/projected/4c548edc-6755-4310-9b8d-780a384ec6bd-kube-api-access-5spvl\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.429329 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-config-data\") pod \"cinder-api-0\" (UID: 
\"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.429481 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-scripts\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.429976 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.430114 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.430209 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-config-data-custom\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.430999 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.446776 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c548edc-6755-4310-9b8d-780a384ec6bd-logs\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.438822 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-public-tls-certs\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.446653 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-config-data\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.447194 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c548edc-6755-4310-9b8d-780a384ec6bd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.447270 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c548edc-6755-4310-9b8d-780a384ec6bd-logs\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.431691 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/787f8a71-dee4-40d2-b33b-85bcfc58f921-kube-api-access-s87kc" (OuterVolumeSpecName: "kube-api-access-s87kc") pod "787f8a71-dee4-40d2-b33b-85bcfc58f921" (UID: "787f8a71-dee4-40d2-b33b-85bcfc58f921"). InnerVolumeSpecName "kube-api-access-s87kc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.447561 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4c548edc-6755-4310-9b8d-780a384ec6bd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.447722 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.452941 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.454959 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-config-data-custom\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.454998 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c548edc-6755-4310-9b8d-780a384ec6bd-scripts\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.456017 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5spvl\" (UniqueName: 
\"kubernetes.io/projected/4c548edc-6755-4310-9b8d-780a384ec6bd-kube-api-access-5spvl\") pod \"cinder-api-0\" (UID: \"4c548edc-6755-4310-9b8d-780a384ec6bd\") " pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.483007 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "787f8a71-dee4-40d2-b33b-85bcfc58f921" (UID: "787f8a71-dee4-40d2-b33b-85bcfc58f921"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.483521 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-config" (OuterVolumeSpecName: "config") pod "787f8a71-dee4-40d2-b33b-85bcfc58f921" (UID: "787f8a71-dee4-40d2-b33b-85bcfc58f921"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.535595 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.549921 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.549974 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s87kc\" (UniqueName: \"kubernetes.io/projected/787f8a71-dee4-40d2-b33b-85bcfc58f921-kube-api-access-s87kc\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.549990 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/787f8a71-dee4-40d2-b33b-85bcfc58f921-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.758221 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b154229-6752-44d3-8b53-96147254af19" path="/var/lib/kubelet/pods/6b154229-6752-44d3-8b53-96147254af19/volumes" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.862995 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-px7xk" Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.863636 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-px7xk" event={"ID":"787f8a71-dee4-40d2-b33b-85bcfc58f921","Type":"ContainerDied","Data":"22e7c478cf5c3572f072dadd10797eb555b9b7702664f2f3d3e6b1d4af431e39"} Feb 19 05:44:04 crc kubenswrapper[5012]: I0219 05:44:04.863715 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22e7c478cf5c3572f072dadd10797eb555b9b7702664f2f3d3e6b1d4af431e39" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.024640 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757c7596dc-4ccqz"] Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.024929 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" podUID="6c5e24dc-215e-4f19-8cf6-241bf57648f9" containerName="dnsmasq-dns" containerID="cri-o://93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38" gracePeriod=10 Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.031201 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.033199 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.060657 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl"] Feb 19 05:44:05 crc kubenswrapper[5012]: E0219 05:44:05.089982 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787f8a71-dee4-40d2-b33b-85bcfc58f921" containerName="neutron-db-sync" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.090145 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="787f8a71-dee4-40d2-b33b-85bcfc58f921" containerName="neutron-db-sync" 
Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.090611 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="787f8a71-dee4-40d2-b33b-85bcfc58f921" containerName="neutron-db-sync" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.103635 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.105544 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl"] Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.275885 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-nb\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.276528 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-svc\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.276640 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-config\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.276722 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-swift-storage-0\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.276934 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-sb\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.277109 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztbnr\" (UniqueName: \"kubernetes.io/projected/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-kube-api-access-ztbnr\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.286851 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77b847d784-sfqqm"] Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.295498 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.298161 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.299218 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.299556 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rtrj8" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.299680 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.325499 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77b847d784-sfqqm"] Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378559 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-svc\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378631 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-config\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378653 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-swift-storage-0\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " 
pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378725 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm6t4\" (UniqueName: \"kubernetes.io/projected/20fc844f-415a-4c39-b2ac-966ff2a43a43-kube-api-access-cm6t4\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378760 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-ovndb-tls-certs\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378777 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-httpd-config\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378817 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-config\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378859 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-sb\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " 
pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378889 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztbnr\" (UniqueName: \"kubernetes.io/projected/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-kube-api-access-ztbnr\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378915 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-nb\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.378934 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-combined-ca-bundle\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.379757 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-svc\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.380053 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-config\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc 
kubenswrapper[5012]: I0219 05:44:05.380421 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-sb\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.380867 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-nb\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.381067 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-swift-storage-0\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.433720 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztbnr\" (UniqueName: \"kubernetes.io/projected/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-kube-api-access-ztbnr\") pod \"dnsmasq-dns-6c7cb6dcdc-qjtxl\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") " pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.482574 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-ovndb-tls-certs\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.482615 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-httpd-config\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.482673 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-config\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.482728 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-combined-ca-bundle\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.482818 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm6t4\" (UniqueName: \"kubernetes.io/projected/20fc844f-415a-4c39-b2ac-966ff2a43a43-kube-api-access-cm6t4\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.495067 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-combined-ca-bundle\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.495085 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-ovndb-tls-certs\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.495827 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-config\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.498939 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-httpd-config\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.503629 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm6t4\" (UniqueName: \"kubernetes.io/projected/20fc844f-415a-4c39-b2ac-966ff2a43a43-kube-api-access-cm6t4\") pod \"neutron-77b847d784-sfqqm\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.539494 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.652590 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.851509 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.898959 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-nb\") pod \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.899067 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wg28\" (UniqueName: \"kubernetes.io/projected/6c5e24dc-215e-4f19-8cf6-241bf57648f9-kube-api-access-9wg28\") pod \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.899117 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-swift-storage-0\") pod \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.899177 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-svc\") pod \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.899221 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-sb\") pod \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.899321 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-config\") pod \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\" (UID: \"6c5e24dc-215e-4f19-8cf6-241bf57648f9\") " Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.927582 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c5e24dc-215e-4f19-8cf6-241bf57648f9-kube-api-access-9wg28" (OuterVolumeSpecName: "kube-api-access-9wg28") pod "6c5e24dc-215e-4f19-8cf6-241bf57648f9" (UID: "6c5e24dc-215e-4f19-8cf6-241bf57648f9"). InnerVolumeSpecName "kube-api-access-9wg28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.957461 5012 generic.go:334] "Generic (PLEG): container finished" podID="6c5e24dc-215e-4f19-8cf6-241bf57648f9" containerID="93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38" exitCode=0 Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.957528 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" event={"ID":"6c5e24dc-215e-4f19-8cf6-241bf57648f9","Type":"ContainerDied","Data":"93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38"} Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.957574 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" event={"ID":"6c5e24dc-215e-4f19-8cf6-241bf57648f9","Type":"ContainerDied","Data":"ead40496902b159e9bebd9ba1a479551b8997a76aa96d1285d684eafe66d05a5"} Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.957592 5012 scope.go:117] "RemoveContainer" containerID="93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.957708 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757c7596dc-4ccqz" Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.982979 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl"] Feb 19 05:44:05 crc kubenswrapper[5012]: I0219 05:44:05.995551 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4c548edc-6755-4310-9b8d-780a384ec6bd","Type":"ContainerStarted","Data":"81b8a8622fe11df4ede94ace220c18a385b1e8288789ef6c75d156fafc627131"} Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.011623 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wg28\" (UniqueName: \"kubernetes.io/projected/6c5e24dc-215e-4f19-8cf6-241bf57648f9-kube-api-access-9wg28\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.026059 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6c5e24dc-215e-4f19-8cf6-241bf57648f9" (UID: "6c5e24dc-215e-4f19-8cf6-241bf57648f9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.044241 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-config" (OuterVolumeSpecName: "config") pod "6c5e24dc-215e-4f19-8cf6-241bf57648f9" (UID: "6c5e24dc-215e-4f19-8cf6-241bf57648f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.045023 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6c5e24dc-215e-4f19-8cf6-241bf57648f9" (UID: "6c5e24dc-215e-4f19-8cf6-241bf57648f9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.101337 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c5e24dc-215e-4f19-8cf6-241bf57648f9" (UID: "6c5e24dc-215e-4f19-8cf6-241bf57648f9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.121665 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.121708 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.121720 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.121730 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:06 crc 
kubenswrapper[5012]: I0219 05:44:06.191948 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c5e24dc-215e-4f19-8cf6-241bf57648f9" (UID: "6c5e24dc-215e-4f19-8cf6-241bf57648f9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.208995 5012 scope.go:117] "RemoveContainer" containerID="b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.224856 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c5e24dc-215e-4f19-8cf6-241bf57648f9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.231650 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77b847d784-sfqqm"] Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.275401 5012 scope.go:117] "RemoveContainer" containerID="93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38" Feb 19 05:44:06 crc kubenswrapper[5012]: E0219 05:44:06.276796 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38\": container with ID starting with 93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38 not found: ID does not exist" containerID="93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.278087 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38"} err="failed to get container status 
\"93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38\": rpc error: code = NotFound desc = could not find container \"93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38\": container with ID starting with 93362f0920b1fc5bd0b07dc87e913124d7b84e04ca85f3646618c0d901b3bf38 not found: ID does not exist" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.278180 5012 scope.go:117] "RemoveContainer" containerID="b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8" Feb 19 05:44:06 crc kubenswrapper[5012]: E0219 05:44:06.279415 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8\": container with ID starting with b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8 not found: ID does not exist" containerID="b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.279755 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8"} err="failed to get container status \"b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8\": rpc error: code = NotFound desc = could not find container \"b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8\": container with ID starting with b717275ff9db947bdce668bf1437e2867662bead558d637c02bb8ecfaf5a96e8 not found: ID does not exist" Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.300352 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757c7596dc-4ccqz"] Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.309642 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757c7596dc-4ccqz"] Feb 19 05:44:06 crc kubenswrapper[5012]: I0219 05:44:06.719014 5012 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="6c5e24dc-215e-4f19-8cf6-241bf57648f9" path="/var/lib/kubelet/pods/6c5e24dc-215e-4f19-8cf6-241bf57648f9/volumes" Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.003618 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4c548edc-6755-4310-9b8d-780a384ec6bd","Type":"ContainerStarted","Data":"891d0dc26068ff29bd823164af1de54e6f4a7ac97540d918ded71559f4c5a68e"} Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.005862 5012 generic.go:334] "Generic (PLEG): container finished" podID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerID="4812a8f6df189761983e7fbdb500126b62d33c0b69d53f9becfbce526c3f3865" exitCode=1 Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.005917 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerDied","Data":"4812a8f6df189761983e7fbdb500126b62d33c0b69d53f9becfbce526c3f3865"} Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.005945 5012 scope.go:117] "RemoveContainer" containerID="ba936c2a2295accf188d98dabc618f0a4eb4fcc0b863a622cffddbfebb246fc3" Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.007278 5012 scope.go:117] "RemoveContainer" containerID="4812a8f6df189761983e7fbdb500126b62d33c0b69d53f9becfbce526c3f3865" Feb 19 05:44:07 crc kubenswrapper[5012]: E0219 05:44:07.008013 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(7fdaa495-6cde-409a-871a-e334ca3f2a91)\"" pod="openstack/watcher-decision-engine-0" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.040343 5012 generic.go:334] "Generic (PLEG): container finished" podID="9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" 
containerID="ca6a3289326a3d74df11835a9c2f296bc10d31bbffc5d5c69c448a3f93f521ea" exitCode=0 Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.040433 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" event={"ID":"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e","Type":"ContainerDied","Data":"ca6a3289326a3d74df11835a9c2f296bc10d31bbffc5d5c69c448a3f93f521ea"} Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.040468 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" event={"ID":"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e","Type":"ContainerStarted","Data":"5892f4877b405b9244dd43361effc1a470655536dbd633845dd04bd643dbfba5"} Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.046032 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77b847d784-sfqqm" event={"ID":"20fc844f-415a-4c39-b2ac-966ff2a43a43","Type":"ContainerStarted","Data":"6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f"} Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.046113 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77b847d784-sfqqm" event={"ID":"20fc844f-415a-4c39-b2ac-966ff2a43a43","Type":"ContainerStarted","Data":"9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a"} Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.046125 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77b847d784-sfqqm" event={"ID":"20fc844f-415a-4c39-b2ac-966ff2a43a43","Type":"ContainerStarted","Data":"05d6404e6cfe0f5924141acac1a5c449939eddf44dc7eb77958158988b1bb5ee"} Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.046196 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.125830 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-77b847d784-sfqqm" 
podStartSLOduration=2.125808324 podStartE2EDuration="2.125808324s" podCreationTimestamp="2026-02-19 05:44:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:07.079571042 +0000 UTC m=+1143.112893611" watchObservedRunningTime="2026-02-19 05:44:07.125808324 +0000 UTC m=+1143.159130893" Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.150730 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.303876 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.307557 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.346080 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f669f7d76-2qg4s" Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.419749 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-778557f86b-hp4xf"] Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.420348 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-778557f86b-hp4xf" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api-log" containerID="cri-o://fe85e93188d20a0757f4ff89e6ad6e7cd4a5a7fc9569c748b0fe68bce7f50e89" gracePeriod=30 Feb 19 05:44:07 crc kubenswrapper[5012]: I0219 05:44:07.420726 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-778557f86b-hp4xf" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api" containerID="cri-o://0a1428fe2110ceec4a472e101ab178eb05366af10098eb515e5229853c308ba9" gracePeriod=30 Feb 19 
05:44:08 crc kubenswrapper[5012]: I0219 05:44:08.060426 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" event={"ID":"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e","Type":"ContainerStarted","Data":"0665dea2b78f255d6fbccb798f4cfaab479a2e00f62ee271920f433e530bc5cb"} Feb 19 05:44:08 crc kubenswrapper[5012]: I0219 05:44:08.060672 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:08 crc kubenswrapper[5012]: I0219 05:44:08.062600 5012 generic.go:334] "Generic (PLEG): container finished" podID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerID="fe85e93188d20a0757f4ff89e6ad6e7cd4a5a7fc9569c748b0fe68bce7f50e89" exitCode=143 Feb 19 05:44:08 crc kubenswrapper[5012]: I0219 05:44:08.062656 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-778557f86b-hp4xf" event={"ID":"6dfce017-0fe6-4613-910b-2c0f88af8bb2","Type":"ContainerDied","Data":"fe85e93188d20a0757f4ff89e6ad6e7cd4a5a7fc9569c748b0fe68bce7f50e89"} Feb 19 05:44:08 crc kubenswrapper[5012]: I0219 05:44:08.064342 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4c548edc-6755-4310-9b8d-780a384ec6bd","Type":"ContainerStarted","Data":"1d296e7a39eecf01a7bb085c9cc72bacf3f971a8d9a82128da5ff4ae87652e7e"} Feb 19 05:44:08 crc kubenswrapper[5012]: I0219 05:44:08.064471 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 05:44:08 crc kubenswrapper[5012]: I0219 05:44:08.088577 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" podStartSLOduration=3.088561229 podStartE2EDuration="3.088561229s" podCreationTimestamp="2026-02-19 05:44:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:08.079004815 +0000 UTC 
m=+1144.112327384" watchObservedRunningTime="2026-02-19 05:44:08.088561229 +0000 UTC m=+1144.121883788" Feb 19 05:44:08 crc kubenswrapper[5012]: I0219 05:44:08.140495 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.14047559 podStartE2EDuration="4.14047559s" podCreationTimestamp="2026-02-19 05:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:08.133330495 +0000 UTC m=+1144.166653084" watchObservedRunningTime="2026-02-19 05:44:08.14047559 +0000 UTC m=+1144.173798159" Feb 19 05:44:08 crc kubenswrapper[5012]: I0219 05:44:08.172599 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.075282 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerName="probe" containerID="cri-o://bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0" gracePeriod=30 Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.075951 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerName="cinder-scheduler" containerID="cri-o://ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b" gracePeriod=30 Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.620462 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5ff88b6c7c-5bg66"] Feb 19 05:44:09 crc kubenswrapper[5012]: E0219 05:44:09.621049 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5e24dc-215e-4f19-8cf6-241bf57648f9" containerName="init" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.621070 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6c5e24dc-215e-4f19-8cf6-241bf57648f9" containerName="init" Feb 19 05:44:09 crc kubenswrapper[5012]: E0219 05:44:09.621080 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5e24dc-215e-4f19-8cf6-241bf57648f9" containerName="dnsmasq-dns" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.621090 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c5e24dc-215e-4f19-8cf6-241bf57648f9" containerName="dnsmasq-dns" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.621341 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c5e24dc-215e-4f19-8cf6-241bf57648f9" containerName="dnsmasq-dns" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.622686 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.624952 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.625204 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.650849 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5ff88b6c7c-5bg66"] Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.718652 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-config\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.718739 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-ovndb-tls-certs\") pod 
\"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.718765 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-httpd-config\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.718787 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-internal-tls-certs\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.718916 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-combined-ca-bundle\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.718970 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-public-tls-certs\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.718991 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8mbg\" (UniqueName: 
\"kubernetes.io/projected/eb805277-3dfc-4810-9845-3ba928d262c2-kube-api-access-m8mbg\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.820682 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-combined-ca-bundle\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.820776 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-public-tls-certs\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.820805 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8mbg\" (UniqueName: \"kubernetes.io/projected/eb805277-3dfc-4810-9845-3ba928d262c2-kube-api-access-m8mbg\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.820849 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-config\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.820903 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-ovndb-tls-certs\") pod 
\"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66"
Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.820927 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-httpd-config\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66"
Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.820949 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-internal-tls-certs\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66"
Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.831142 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-httpd-config\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66"
Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.831167 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-combined-ca-bundle\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66"
Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.832083 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-public-tls-certs\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66"
Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.834165 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-internal-tls-certs\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66"
Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.834958 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-config\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66"
Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.835556 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb805277-3dfc-4810-9845-3ba928d262c2-ovndb-tls-certs\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66"
Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.843929 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8mbg\" (UniqueName: \"kubernetes.io/projected/eb805277-3dfc-4810-9845-3ba928d262c2-kube-api-access-m8mbg\") pod \"neutron-5ff88b6c7c-5bg66\" (UID: \"eb805277-3dfc-4810-9845-3ba928d262c2\") " pod="openstack/neutron-5ff88b6c7c-5bg66"
Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.932438 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c6b5c5b7b-9nnqj"
Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.944369 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c6b5c5b7b-9nnqj"
Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.955151 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5ff88b6c7c-5bg66"
Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.960177 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f94997dd8-cvnfv"
Feb 19 05:44:09 crc kubenswrapper[5012]: I0219 05:44:09.961998 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f94997dd8-cvnfv"
Feb 19 05:44:10 crc kubenswrapper[5012]: I0219 05:44:10.121918 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5c6b5c5b7b-9nnqj"]
Feb 19 05:44:10 crc kubenswrapper[5012]: I0219 05:44:10.151518 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"]
Feb 19 05:44:10 crc kubenswrapper[5012]: I0219 05:44:10.151964 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api" containerID="cri-o://4c7e7897254d29f17ce8fe214986663a24ab7ca2a73051f5e809d6f1daf31a29" gracePeriod=30
Feb 19 05:44:10 crc kubenswrapper[5012]: I0219 05:44:10.151915 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api-log" containerID="cri-o://65be4651ae750a28ef010be6e5423125eee000964a57e57affa6249b22b2eb91" gracePeriod=30
Feb 19 05:44:10 crc kubenswrapper[5012]: I0219 05:44:10.168636 5012 generic.go:334] "Generic (PLEG): container finished" podID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerID="bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0" exitCode=0
Feb 19 05:44:10 crc kubenswrapper[5012]: I0219 05:44:10.169146 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a9c1c12b-f055-417b-9300-706f98b0f8cc","Type":"ContainerDied","Data":"bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0"}
Feb 19 05:44:10 crc kubenswrapper[5012]: I0219 05:44:10.808315 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-778557f86b-hp4xf" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": read tcp 10.217.0.2:60328->10.217.0.175:9311: read: connection reset by peer"
Feb 19 05:44:10 crc kubenswrapper[5012]: I0219 05:44:10.809384 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-778557f86b-hp4xf" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.175:9311/healthcheck\": read tcp 10.217.0.2:60326->10.217.0.175:9311: read: connection reset by peer"
Feb 19 05:44:10 crc kubenswrapper[5012]: I0219 05:44:10.856036 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5ff88b6c7c-5bg66"]
Feb 19 05:44:10 crc kubenswrapper[5012]: W0219 05:44:10.880149 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb805277_3dfc_4810_9845_3ba928d262c2.slice/crio-f2b086a9c4d22f99f2517d3e9c47f0f31042cc048d8980b321e79638eb715ed4 WatchSource:0}: Error finding container f2b086a9c4d22f99f2517d3e9c47f0f31042cc048d8980b321e79638eb715ed4: Status 404 returned error can't find the container with id f2b086a9c4d22f99f2517d3e9c47f0f31042cc048d8980b321e79638eb715ed4
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.195941 5012 generic.go:334] "Generic (PLEG): container finished" podID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerID="0a1428fe2110ceec4a472e101ab178eb05366af10098eb515e5229853c308ba9" exitCode=0
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.196333 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-778557f86b-hp4xf" event={"ID":"6dfce017-0fe6-4613-910b-2c0f88af8bb2","Type":"ContainerDied","Data":"0a1428fe2110ceec4a472e101ab178eb05366af10098eb515e5229853c308ba9"}
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.202246 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ff88b6c7c-5bg66" event={"ID":"eb805277-3dfc-4810-9845-3ba928d262c2","Type":"ContainerStarted","Data":"f2b086a9c4d22f99f2517d3e9c47f0f31042cc048d8980b321e79638eb715ed4"}
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.212697 5012 generic.go:334] "Generic (PLEG): container finished" podID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerID="65be4651ae750a28ef010be6e5423125eee000964a57e57affa6249b22b2eb91" exitCode=143
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.212734 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"17c5eb4a-b8b3-4178-b5a0-2a37211266e6","Type":"ContainerDied","Data":"65be4651ae750a28ef010be6e5423125eee000964a57e57affa6249b22b2eb91"}
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.212947 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5c6b5c5b7b-9nnqj" podUID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerName="placement-log" containerID="cri-o://f9417f3089ab939acabaf087bdedc14bb6991a7978946e02fec09196a1d9ec1c" gracePeriod=30
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.212995 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5c6b5c5b7b-9nnqj" podUID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerName="placement-api" containerID="cri-o://1bf5d73af424c2f421bc54586605dbed2a0980894768360700238dc093ac82ff" gracePeriod=30
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.297598 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-778557f86b-hp4xf"
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.380814 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dfce017-0fe6-4613-910b-2c0f88af8bb2-logs\") pod \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") "
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.380982 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data-custom\") pod \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") "
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.381020 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-combined-ca-bundle\") pod \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") "
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.381078 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data\") pod \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") "
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.381108 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j52pf\" (UniqueName: \"kubernetes.io/projected/6dfce017-0fe6-4613-910b-2c0f88af8bb2-kube-api-access-j52pf\") pod \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\" (UID: \"6dfce017-0fe6-4613-910b-2c0f88af8bb2\") "
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.382408 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6dfce017-0fe6-4613-910b-2c0f88af8bb2-logs" (OuterVolumeSpecName: "logs") pod "6dfce017-0fe6-4613-910b-2c0f88af8bb2" (UID: "6dfce017-0fe6-4613-910b-2c0f88af8bb2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.384515 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dfce017-0fe6-4613-910b-2c0f88af8bb2-kube-api-access-j52pf" (OuterVolumeSpecName: "kube-api-access-j52pf") pod "6dfce017-0fe6-4613-910b-2c0f88af8bb2" (UID: "6dfce017-0fe6-4613-910b-2c0f88af8bb2"). InnerVolumeSpecName "kube-api-access-j52pf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.385640 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6dfce017-0fe6-4613-910b-2c0f88af8bb2" (UID: "6dfce017-0fe6-4613-910b-2c0f88af8bb2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.409603 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6dfce017-0fe6-4613-910b-2c0f88af8bb2" (UID: "6dfce017-0fe6-4613-910b-2c0f88af8bb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.460190 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data" (OuterVolumeSpecName: "config-data") pod "6dfce017-0fe6-4613-910b-2c0f88af8bb2" (UID: "6dfce017-0fe6-4613-910b-2c0f88af8bb2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.482843 5012 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.482872 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.482881 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dfce017-0fe6-4613-910b-2c0f88af8bb2-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.482890 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j52pf\" (UniqueName: \"kubernetes.io/projected/6dfce017-0fe6-4613-910b-2c0f88af8bb2-kube-api-access-j52pf\") on node \"crc\" DevicePath \"\""
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.482899 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6dfce017-0fe6-4613-910b-2c0f88af8bb2-logs\") on node \"crc\" DevicePath \"\""
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.808770 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.809208 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.809883 5012 scope.go:117] "RemoveContainer" containerID="4812a8f6df189761983e7fbdb500126b62d33c0b69d53f9becfbce526c3f3865"
Feb 19 05:44:11 crc kubenswrapper[5012]: E0219 05:44:11.810085 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(7fdaa495-6cde-409a-871a-e334ca3f2a91)\"" pod="openstack/watcher-decision-engine-0" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91"
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.884337 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75cc7d9585-x8r8l" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon" probeResult="failure" output="Get \"http://10.217.0.157:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.157:8080: connect: connection refused"
Feb 19 05:44:11 crc kubenswrapper[5012]: I0219 05:44:11.884456 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-75cc7d9585-x8r8l"
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.058109 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.169:9322/\": read tcp 10.217.0.2:46316->10.217.0.169:9322: read: connection reset by peer"
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.058114 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.169:9322/\": read tcp 10.217.0.2:46314->10.217.0.169:9322: read: connection reset by peer"
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.107907 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7b574779c9-x2bsv"
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.232597 5012 generic.go:334] "Generic (PLEG): container finished" podID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerID="4c7e7897254d29f17ce8fe214986663a24ab7ca2a73051f5e809d6f1daf31a29" exitCode=0
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.232673 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"17c5eb4a-b8b3-4178-b5a0-2a37211266e6","Type":"ContainerDied","Data":"4c7e7897254d29f17ce8fe214986663a24ab7ca2a73051f5e809d6f1daf31a29"}
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.239922 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-778557f86b-hp4xf" event={"ID":"6dfce017-0fe6-4613-910b-2c0f88af8bb2","Type":"ContainerDied","Data":"831c1b2e39b299e04f560adb31739eb0da9f5a5165d710984ac8d2ab457658e9"}
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.239957 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-778557f86b-hp4xf"
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.239968 5012 scope.go:117] "RemoveContainer" containerID="0a1428fe2110ceec4a472e101ab178eb05366af10098eb515e5229853c308ba9"
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.256406 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ff88b6c7c-5bg66" event={"ID":"eb805277-3dfc-4810-9845-3ba928d262c2","Type":"ContainerStarted","Data":"5baad8992d0e0a0354c70247939ace59bcd61af49dbf633317c1595c364e8821"}
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.256448 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5ff88b6c7c-5bg66" event={"ID":"eb805277-3dfc-4810-9845-3ba928d262c2","Type":"ContainerStarted","Data":"853a12ed08e2f2e0f8f4850de102e800017933151f3260846449b2588200be43"}
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.257524 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5ff88b6c7c-5bg66"
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.262557 5012 generic.go:334] "Generic (PLEG): container finished" podID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerID="f9417f3089ab939acabaf087bdedc14bb6991a7978946e02fec09196a1d9ec1c" exitCode=143
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.262601 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c6b5c5b7b-9nnqj" event={"ID":"d214ce94-6c65-4641-a1e2-21f5f920ecec","Type":"ContainerDied","Data":"f9417f3089ab939acabaf087bdedc14bb6991a7978946e02fec09196a1d9ec1c"}
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.267020 5012 scope.go:117] "RemoveContainer" containerID="fe85e93188d20a0757f4ff89e6ad6e7cd4a5a7fc9569c748b0fe68bce7f50e89"
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.279734 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5ff88b6c7c-5bg66" podStartSLOduration=3.279720115 podStartE2EDuration="3.279720115s" podCreationTimestamp="2026-02-19 05:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:12.276022335 +0000 UTC m=+1148.309344904" watchObservedRunningTime="2026-02-19 05:44:12.279720115 +0000 UTC m=+1148.313042684"
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.300541 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-778557f86b-hp4xf"]
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.305619 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-778557f86b-hp4xf"]
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.578852 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.604204 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-logs\") pod \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") "
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.604465 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-combined-ca-bundle\") pod \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") "
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.604530 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5wsl\" (UniqueName: \"kubernetes.io/projected/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-kube-api-access-m5wsl\") pod \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") "
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.604699 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-custom-prometheus-ca\") pod \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") "
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.604802 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-config-data\") pod \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\" (UID: \"17c5eb4a-b8b3-4178-b5a0-2a37211266e6\") "
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.606887 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-logs" (OuterVolumeSpecName: "logs") pod "17c5eb4a-b8b3-4178-b5a0-2a37211266e6" (UID: "17c5eb4a-b8b3-4178-b5a0-2a37211266e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.629719 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-kube-api-access-m5wsl" (OuterVolumeSpecName: "kube-api-access-m5wsl") pod "17c5eb4a-b8b3-4178-b5a0-2a37211266e6" (UID: "17c5eb4a-b8b3-4178-b5a0-2a37211266e6"). InnerVolumeSpecName "kube-api-access-m5wsl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.678521 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17c5eb4a-b8b3-4178-b5a0-2a37211266e6" (UID: "17c5eb4a-b8b3-4178-b5a0-2a37211266e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.696865 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "17c5eb4a-b8b3-4178-b5a0-2a37211266e6" (UID: "17c5eb4a-b8b3-4178-b5a0-2a37211266e6"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.708820 5012 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.708847 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-logs\") on node \"crc\" DevicePath \"\""
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.708855 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.708863 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5wsl\" (UniqueName: \"kubernetes.io/projected/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-kube-api-access-m5wsl\") on node \"crc\" DevicePath \"\""
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.714447 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-config-data" (OuterVolumeSpecName: "config-data") pod "17c5eb4a-b8b3-4178-b5a0-2a37211266e6" (UID: "17c5eb4a-b8b3-4178-b5a0-2a37211266e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.730842 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" path="/var/lib/kubelet/pods/6dfce017-0fe6-4613-910b-2c0f88af8bb2/volumes"
Feb 19 05:44:12 crc kubenswrapper[5012]: I0219 05:44:12.811210 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17c5eb4a-b8b3-4178-b5a0-2a37211266e6-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.271236 5012 generic.go:334] "Generic (PLEG): container finished" podID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerID="1bf5d73af424c2f421bc54586605dbed2a0980894768360700238dc093ac82ff" exitCode=0
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.271327 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c6b5c5b7b-9nnqj" event={"ID":"d214ce94-6c65-4641-a1e2-21f5f920ecec","Type":"ContainerDied","Data":"1bf5d73af424c2f421bc54586605dbed2a0980894768360700238dc093ac82ff"}
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.271758 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c6b5c5b7b-9nnqj" event={"ID":"d214ce94-6c65-4641-a1e2-21f5f920ecec","Type":"ContainerDied","Data":"020e2e77d5547a74ce74ede9f57616121d05cdbb046cf4e2e88cca4fa12f2d3b"}
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.271782 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="020e2e77d5547a74ce74ede9f57616121d05cdbb046cf4e2e88cca4fa12f2d3b"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.273953 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"17c5eb4a-b8b3-4178-b5a0-2a37211266e6","Type":"ContainerDied","Data":"1f8ff58170fed0be8d7680ffb942663aaa5ec3f1c388578dbd28c9e5432c8ac1"}
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.274019 5012 scope.go:117] "RemoveContainer" containerID="4c7e7897254d29f17ce8fe214986663a24ab7ca2a73051f5e809d6f1daf31a29"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.274039 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.351875 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c6b5c5b7b-9nnqj"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.362772 5012 scope.go:117] "RemoveContainer" containerID="65be4651ae750a28ef010be6e5423125eee000964a57e57affa6249b22b2eb91"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.388216 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"]
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.402673 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"]
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.420646 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"]
Feb 19 05:44:13 crc kubenswrapper[5012]: E0219 05:44:13.420985 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api-log"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421000 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api-log"
Feb 19 05:44:13 crc kubenswrapper[5012]: E0219 05:44:13.421010 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421017 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api"
Feb 19 05:44:13 crc kubenswrapper[5012]: E0219 05:44:13.421028 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api-log"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421034 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api-log"
Feb 19 05:44:13 crc kubenswrapper[5012]: E0219 05:44:13.421052 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerName="placement-api"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421058 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerName="placement-api"
Feb 19 05:44:13 crc kubenswrapper[5012]: E0219 05:44:13.421069 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421086 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api"
Feb 19 05:44:13 crc kubenswrapper[5012]: E0219 05:44:13.421102 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerName="placement-log"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421107 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerName="placement-log"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421285 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api-log"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421294 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dfce017-0fe6-4613-910b-2c0f88af8bb2" containerName="barbican-api"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421321 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerName="placement-api"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421331 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api-log"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421346 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d214ce94-6c65-4641-a1e2-21f5f920ecec" containerName="placement-log"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.421361 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" containerName="watcher-api"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.422430 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-public-tls-certs\") pod \"d214ce94-6c65-4641-a1e2-21f5f920ecec\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") "
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.422469 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-internal-tls-certs\") pod \"d214ce94-6c65-4641-a1e2-21f5f920ecec\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") "
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.422514 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d214ce94-6c65-4641-a1e2-21f5f920ecec-logs\") pod \"d214ce94-6c65-4641-a1e2-21f5f920ecec\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") "
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.422540 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7kbf\" (UniqueName: \"kubernetes.io/projected/d214ce94-6c65-4641-a1e2-21f5f920ecec-kube-api-access-s7kbf\") pod \"d214ce94-6c65-4641-a1e2-21f5f920ecec\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") "
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.422561 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-scripts\") pod \"d214ce94-6c65-4641-a1e2-21f5f920ecec\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") "
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.422651 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-combined-ca-bundle\") pod \"d214ce94-6c65-4641-a1e2-21f5f920ecec\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") "
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.422673 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-config-data\") pod \"d214ce94-6c65-4641-a1e2-21f5f920ecec\" (UID: \"d214ce94-6c65-4641-a1e2-21f5f920ecec\") "
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.424668 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d214ce94-6c65-4641-a1e2-21f5f920ecec-logs" (OuterVolumeSpecName: "logs") pod "d214ce94-6c65-4641-a1e2-21f5f920ecec" (UID: "d214ce94-6c65-4641-a1e2-21f5f920ecec"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.433173 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.437119 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.437552 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.440131 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.441947 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-scripts" (OuterVolumeSpecName: "scripts") pod "d214ce94-6c65-4641-a1e2-21f5f920ecec" (UID: "d214ce94-6c65-4641-a1e2-21f5f920ecec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.472925 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.507692 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d214ce94-6c65-4641-a1e2-21f5f920ecec-kube-api-access-s7kbf" (OuterVolumeSpecName: "kube-api-access-s7kbf") pod "d214ce94-6c65-4641-a1e2-21f5f920ecec" (UID: "d214ce94-6c65-4641-a1e2-21f5f920ecec"). InnerVolumeSpecName "kube-api-access-s7kbf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.525903 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-config-data\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.525995 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.526014 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.526052 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqmhn\" (UniqueName: \"kubernetes.io/projected/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-kube-api-access-nqmhn\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.526071 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-logs\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0"
Feb 19 05:44:13 crc kubenswrapper[5012]: I0219
05:44:13.526094 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.526153 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-public-tls-certs\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.526198 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d214ce94-6c65-4641-a1e2-21f5f920ecec-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.526209 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7kbf\" (UniqueName: \"kubernetes.io/projected/d214ce94-6c65-4641-a1e2-21f5f920ecec-kube-api-access-s7kbf\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.526220 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.569198 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-config-data" (OuterVolumeSpecName: "config-data") pod "d214ce94-6c65-4641-a1e2-21f5f920ecec" (UID: "d214ce94-6c65-4641-a1e2-21f5f920ecec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.599178 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d214ce94-6c65-4641-a1e2-21f5f920ecec" (UID: "d214ce94-6c65-4641-a1e2-21f5f920ecec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.627602 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-public-tls-certs\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.627646 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-config-data\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.627712 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.627729 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 
05:44:13.627768 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqmhn\" (UniqueName: \"kubernetes.io/projected/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-kube-api-access-nqmhn\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.627789 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-logs\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.627812 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.627858 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.627873 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.629404 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-logs\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.636220 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.636924 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-public-tls-certs\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.639735 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.639949 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.640126 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-config-data\") pod \"watcher-api-0\" (UID: \"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.647849 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqmhn\" (UniqueName: \"kubernetes.io/projected/7d74d5de-7e1d-47cc-8aaa-cb303332a03a-kube-api-access-nqmhn\") pod \"watcher-api-0\" (UID: 
\"7d74d5de-7e1d-47cc-8aaa-cb303332a03a\") " pod="openstack/watcher-api-0" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.648507 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d214ce94-6c65-4641-a1e2-21f5f920ecec" (UID: "d214ce94-6c65-4641-a1e2-21f5f920ecec"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.654465 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d214ce94-6c65-4641-a1e2-21f5f920ecec" (UID: "d214ce94-6c65-4641-a1e2-21f5f920ecec"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.732001 5012 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.732366 5012 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d214ce94-6c65-4641-a1e2-21f5f920ecec-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:13 crc kubenswrapper[5012]: I0219 05:44:13.827001 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.219329 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.313706 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.313723 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a9c1c12b-f055-417b-9300-706f98b0f8cc","Type":"ContainerDied","Data":"ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b"} Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.313783 5012 scope.go:117] "RemoveContainer" containerID="bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.313702 5012 generic.go:334] "Generic (PLEG): container finished" podID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerID="ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b" exitCode=0 Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.314400 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a9c1c12b-f055-417b-9300-706f98b0f8cc","Type":"ContainerDied","Data":"700c4e558c2fc29ae4be5133cfa56a73c8a1d1f1fb5ea15ea68c90c99124dbe1"} Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.314442 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5c6b5c5b7b-9nnqj" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.354761 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data\") pod \"a9c1c12b-f055-417b-9300-706f98b0f8cc\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.356794 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data-custom\") pod \"a9c1c12b-f055-417b-9300-706f98b0f8cc\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.356887 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-combined-ca-bundle\") pod \"a9c1c12b-f055-417b-9300-706f98b0f8cc\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.356911 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-scripts\") pod \"a9c1c12b-f055-417b-9300-706f98b0f8cc\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.356935 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cch2x\" (UniqueName: \"kubernetes.io/projected/a9c1c12b-f055-417b-9300-706f98b0f8cc-kube-api-access-cch2x\") pod \"a9c1c12b-f055-417b-9300-706f98b0f8cc\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.356998 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9c1c12b-f055-417b-9300-706f98b0f8cc-etc-machine-id\") pod \"a9c1c12b-f055-417b-9300-706f98b0f8cc\" (UID: \"a9c1c12b-f055-417b-9300-706f98b0f8cc\") " Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.360677 5012 scope.go:117] "RemoveContainer" containerID="ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.362557 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9c1c12b-f055-417b-9300-706f98b0f8cc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a9c1c12b-f055-417b-9300-706f98b0f8cc" (UID: "a9c1c12b-f055-417b-9300-706f98b0f8cc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.365075 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c1c12b-f055-417b-9300-706f98b0f8cc-kube-api-access-cch2x" (OuterVolumeSpecName: "kube-api-access-cch2x") pod "a9c1c12b-f055-417b-9300-706f98b0f8cc" (UID: "a9c1c12b-f055-417b-9300-706f98b0f8cc"). InnerVolumeSpecName "kube-api-access-cch2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.375803 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a9c1c12b-f055-417b-9300-706f98b0f8cc" (UID: "a9c1c12b-f055-417b-9300-706f98b0f8cc"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.379951 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5c6b5c5b7b-9nnqj"] Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.384443 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-scripts" (OuterVolumeSpecName: "scripts") pod "a9c1c12b-f055-417b-9300-706f98b0f8cc" (UID: "a9c1c12b-f055-417b-9300-706f98b0f8cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.391890 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5c6b5c5b7b-9nnqj"] Feb 19 05:44:14 crc kubenswrapper[5012]: W0219 05:44:14.440958 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d74d5de_7e1d_47cc_8aaa_cb303332a03a.slice/crio-5a97027ceaa1b3242b364b1c3887838d74e75f243a870b871c16fde1bed831b3 WatchSource:0}: Error finding container 5a97027ceaa1b3242b364b1c3887838d74e75f243a870b871c16fde1bed831b3: Status 404 returned error can't find the container with id 5a97027ceaa1b3242b364b1c3887838d74e75f243a870b871c16fde1bed831b3 Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.460060 5012 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9c1c12b-f055-417b-9300-706f98b0f8cc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.460095 5012 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.460109 5012 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.460120 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cch2x\" (UniqueName: \"kubernetes.io/projected/a9c1c12b-f055-417b-9300-706f98b0f8cc-kube-api-access-cch2x\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.480461 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.501549 5012 scope.go:117] "RemoveContainer" containerID="bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.501552 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9c1c12b-f055-417b-9300-706f98b0f8cc" (UID: "a9c1c12b-f055-417b-9300-706f98b0f8cc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:14 crc kubenswrapper[5012]: E0219 05:44:14.502486 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0\": container with ID starting with bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0 not found: ID does not exist" containerID="bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.502546 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0"} err="failed to get container status \"bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0\": rpc error: code = NotFound desc = could not find container \"bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0\": container with ID starting with bff9ea0b40044a2d8dba6e1e446d9d5e894b8018b61c01da7c1ced3c35dd9de0 not found: ID does not exist" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.502576 5012 scope.go:117] "RemoveContainer" containerID="ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b" Feb 19 05:44:14 crc kubenswrapper[5012]: E0219 05:44:14.505055 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b\": container with ID starting with ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b not found: ID does not exist" containerID="ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.505079 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b"} err="failed 
to get container status \"ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b\": rpc error: code = NotFound desc = could not find container \"ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b\": container with ID starting with ebd4fed6ae2d20124d54c82a0bf10498c1cf45de457e508d13e1bdf3cc19bc1b not found: ID does not exist" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.535292 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data" (OuterVolumeSpecName: "config-data") pod "a9c1c12b-f055-417b-9300-706f98b0f8cc" (UID: "a9c1c12b-f055-417b-9300-706f98b0f8cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.561709 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.561738 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c1c12b-f055-417b-9300-706f98b0f8cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.650391 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.659593 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.679829 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 05:44:14 crc kubenswrapper[5012]: E0219 05:44:14.680313 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerName="cinder-scheduler" Feb 19 05:44:14 crc 
kubenswrapper[5012]: I0219 05:44:14.680333 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerName="cinder-scheduler" Feb 19 05:44:14 crc kubenswrapper[5012]: E0219 05:44:14.680344 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerName="probe" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.680353 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerName="probe" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.680650 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerName="cinder-scheduler" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.680667 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c1c12b-f055-417b-9300-706f98b0f8cc" containerName="probe" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.681861 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.685862 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.743153 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17c5eb4a-b8b3-4178-b5a0-2a37211266e6" path="/var/lib/kubelet/pods/17c5eb4a-b8b3-4178-b5a0-2a37211266e6/volumes" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.743783 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9c1c12b-f055-417b-9300-706f98b0f8cc" path="/var/lib/kubelet/pods/a9c1c12b-f055-417b-9300-706f98b0f8cc/volumes" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.745369 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d214ce94-6c65-4641-a1e2-21f5f920ecec" path="/var/lib/kubelet/pods/d214ce94-6c65-4641-a1e2-21f5f920ecec/volumes" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.746404 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.768917 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vtfb\" (UniqueName: \"kubernetes.io/projected/42946b07-c256-43a7-99d0-45f94c019663-kube-api-access-2vtfb\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.769024 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42946b07-c256-43a7-99d0-45f94c019663-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.769089 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-config-data\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.769148 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.769168 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-scripts\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.769191 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.870872 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vtfb\" (UniqueName: \"kubernetes.io/projected/42946b07-c256-43a7-99d0-45f94c019663-kube-api-access-2vtfb\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.870954 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42946b07-c256-43a7-99d0-45f94c019663-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.871033 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-config-data\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.871070 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42946b07-c256-43a7-99d0-45f94c019663-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.871096 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.871176 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-scripts\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.871240 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.875729 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-scripts\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.878139 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-config-data\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.878950 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.883749 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42946b07-c256-43a7-99d0-45f94c019663-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:14 crc kubenswrapper[5012]: I0219 05:44:14.888424 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vtfb\" (UniqueName: \"kubernetes.io/projected/42946b07-c256-43a7-99d0-45f94c019663-kube-api-access-2vtfb\") pod \"cinder-scheduler-0\" (UID: \"42946b07-c256-43a7-99d0-45f94c019663\") " pod="openstack/cinder-scheduler-0" Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.000672 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.346500 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7d74d5de-7e1d-47cc-8aaa-cb303332a03a","Type":"ContainerStarted","Data":"a7908398478d5b196be10ac474c1cbebad2ba060379ae3af8ceb4482a8c331ad"} Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.346862 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7d74d5de-7e1d-47cc-8aaa-cb303332a03a","Type":"ContainerStarted","Data":"8a1bbdc39025fc8ea5f32cc89279b1b49872252c88e364ca4a448083da327fe8"} Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.346874 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7d74d5de-7e1d-47cc-8aaa-cb303332a03a","Type":"ContainerStarted","Data":"5a97027ceaa1b3242b364b1c3887838d74e75f243a870b871c16fde1bed831b3"} Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.348414 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.414205 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.414183377 podStartE2EDuration="2.414183377s" podCreationTimestamp="2026-02-19 05:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:15.378163506 +0000 UTC m=+1151.411486075" watchObservedRunningTime="2026-02-19 05:44:15.414183377 +0000 UTC m=+1151.447505946" Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.544353 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.619889 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-69cc8c4d6f-zkg8h"] Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.620142 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" podUID="848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" containerName="dnsmasq-dns" containerID="cri-o://cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23" gracePeriod=10 Feb 19 05:44:15 crc kubenswrapper[5012]: I0219 05:44:15.696494 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 05:44:15 crc kubenswrapper[5012]: W0219 05:44:15.726473 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42946b07_c256_43a7_99d0_45f94c019663.slice/crio-4d49f99cf63c7a4469218344357949aff2f5db7cd48450372ee18d0677e9d8bf WatchSource:0}: Error finding container 4d49f99cf63c7a4469218344357949aff2f5db7cd48450372ee18d0677e9d8bf: Status 404 returned error can't find the container with id 4d49f99cf63c7a4469218344357949aff2f5db7cd48450372ee18d0677e9d8bf Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.065445 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.067136 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.069507 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.070598 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-99wcv" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.077531 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.077891 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.098925 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54hqg\" (UniqueName: \"kubernetes.io/projected/75258dbe-c223-4e55-92a6-8e588745294a-kube-api-access-54hqg\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.098991 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75258dbe-c223-4e55-92a6-8e588745294a-openstack-config-secret\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.099032 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/75258dbe-c223-4e55-92a6-8e588745294a-openstack-config\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.099053 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75258dbe-c223-4e55-92a6-8e588745294a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.199733 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54hqg\" (UniqueName: \"kubernetes.io/projected/75258dbe-c223-4e55-92a6-8e588745294a-kube-api-access-54hqg\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.199797 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75258dbe-c223-4e55-92a6-8e588745294a-openstack-config-secret\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.199837 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/75258dbe-c223-4e55-92a6-8e588745294a-openstack-config\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.199859 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75258dbe-c223-4e55-92a6-8e588745294a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.203017 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/75258dbe-c223-4e55-92a6-8e588745294a-openstack-config\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.215882 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/75258dbe-c223-4e55-92a6-8e588745294a-openstack-config-secret\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.219492 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54hqg\" (UniqueName: \"kubernetes.io/projected/75258dbe-c223-4e55-92a6-8e588745294a-kube-api-access-54hqg\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.227891 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75258dbe-c223-4e55-92a6-8e588745294a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"75258dbe-c223-4e55-92a6-8e588745294a\") " pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.319261 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.396179 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.405865 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-sb\") pod \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.405975 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-swift-storage-0\") pod \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.406025 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-nb\") pod \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.406058 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-config\") pod \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.406120 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frcn7\" (UniqueName: \"kubernetes.io/projected/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-kube-api-access-frcn7\") pod \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.406161 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-svc\") pod \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\" (UID: \"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7\") " Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.406459 5012 generic.go:334] "Generic (PLEG): container finished" podID="848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" containerID="cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23" exitCode=0 Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.406506 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" event={"ID":"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7","Type":"ContainerDied","Data":"cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23"} Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.406534 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" event={"ID":"848d11a3-0f68-49f2-8cd6-d00f53f5b0d7","Type":"ContainerDied","Data":"a1291378cdde1b6340e354ff4d89e75f3fa2d7a84c8a3f64370b1decfc0c8b1c"} Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.406549 5012 scope.go:117] "RemoveContainer" containerID="cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.406628 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69cc8c4d6f-zkg8h" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.456258 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"42946b07-c256-43a7-99d0-45f94c019663","Type":"ContainerStarted","Data":"4d49f99cf63c7a4469218344357949aff2f5db7cd48450372ee18d0677e9d8bf"} Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.481604 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-kube-api-access-frcn7" (OuterVolumeSpecName: "kube-api-access-frcn7") pod "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" (UID: "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7"). InnerVolumeSpecName "kube-api-access-frcn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.512664 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frcn7\" (UniqueName: \"kubernetes.io/projected/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-kube-api-access-frcn7\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.579137 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" (UID: "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.605295 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" (UID: "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.614081 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" (UID: "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.614548 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.614562 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.614573 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.615775 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" (UID: "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.661596 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-config" (OuterVolumeSpecName: "config") pod "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" (UID: "848d11a3-0f68-49f2-8cd6-d00f53f5b0d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.670743 5012 scope.go:117] "RemoveContainer" containerID="46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.716259 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.716295 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.719107 5012 scope.go:117] "RemoveContainer" containerID="cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23" Feb 19 05:44:16 crc kubenswrapper[5012]: E0219 05:44:16.719851 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23\": container with ID starting with cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23 not found: ID does not exist" containerID="cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.719909 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23"} err="failed to get container status \"cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23\": rpc error: code = NotFound desc = could not find container \"cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23\": container with ID starting with cfab3349ce09c487714ea93f0e0d0a661f4da7177bf3a014a98028faadf38b23 not found: ID does not exist" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.719934 5012 scope.go:117] "RemoveContainer" containerID="46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe" Feb 19 05:44:16 crc kubenswrapper[5012]: E0219 05:44:16.720799 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe\": container with ID starting with 46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe not found: ID does not exist" containerID="46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.720824 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe"} err="failed to get container status \"46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe\": rpc error: code = NotFound desc = could not find container \"46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe\": container with ID starting with 46313d624f00cfdb15940455127268d47e601f86b5d2c3b5048eb8883755b3fe not found: ID does not exist" Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.764586 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69cc8c4d6f-zkg8h"] Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.786241 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-69cc8c4d6f-zkg8h"] Feb 19 05:44:16 crc kubenswrapper[5012]: I0219 05:44:16.978180 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 05:44:17 crc kubenswrapper[5012]: I0219 05:44:17.466478 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"75258dbe-c223-4e55-92a6-8e588745294a","Type":"ContainerStarted","Data":"7e7ab391141582e000d1039683e216e4c6d0486f5dd4ddf726f4f452bb59b0db"} Feb 19 05:44:17 crc kubenswrapper[5012]: I0219 05:44:17.472453 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"42946b07-c256-43a7-99d0-45f94c019663","Type":"ContainerStarted","Data":"47a3ea34ecfaad01acb97532a27081ad24ab168ffd93d9eb4032625cfdc5a3fd"} Feb 19 05:44:17 crc kubenswrapper[5012]: I0219 05:44:17.477369 5012 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 05:44:17 crc kubenswrapper[5012]: I0219 05:44:17.883080 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 05:44:18 crc kubenswrapper[5012]: I0219 05:44:18.497329 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"42946b07-c256-43a7-99d0-45f94c019663","Type":"ContainerStarted","Data":"4b262b6a4abbd218c72a86b7b7bea169f6aaad28e2988ee8eea6494fba91952a"} Feb 19 05:44:18 crc kubenswrapper[5012]: I0219 05:44:18.529038 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.529015128 podStartE2EDuration="4.529015128s" podCreationTimestamp="2026-02-19 05:44:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:18.522680023 +0000 UTC m=+1154.556002582" watchObservedRunningTime="2026-02-19 05:44:18.529015128 +0000 UTC m=+1154.562337697" Feb 19 05:44:18 crc 
kubenswrapper[5012]: I0219 05:44:18.717187 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" path="/var/lib/kubelet/pods/848d11a3-0f68-49f2-8cd6-d00f53f5b0d7/volumes" Feb 19 05:44:18 crc kubenswrapper[5012]: I0219 05:44:18.761680 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 05:44:18 crc kubenswrapper[5012]: I0219 05:44:18.828082 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.001793 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.527784 5012 generic.go:334] "Generic (PLEG): container finished" podID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerID="3fcdc6a7de1157e87df26c6381be0f82492f8c4422bc5e6ab2f42667c4a696ee" exitCode=137 Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.527870 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75cc7d9585-x8r8l" event={"ID":"7c163961-185c-418b-a0f5-a4d55b59f3ec","Type":"ContainerDied","Data":"3fcdc6a7de1157e87df26c6381be0f82492f8c4422bc5e6ab2f42667c4a696ee"} Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.528198 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75cc7d9585-x8r8l" event={"ID":"7c163961-185c-418b-a0f5-a4d55b59f3ec","Type":"ContainerDied","Data":"7cfa7cf48e4edcddab8aec2d0bfb0aeea8557ac2316f0d4b2e00c1aa2310cba1"} Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.528212 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cfa7cf48e4edcddab8aec2d0bfb0aeea8557ac2316f0d4b2e00c1aa2310cba1" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.623271 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.768366 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c163961-185c-418b-a0f5-a4d55b59f3ec-logs\") pod \"7c163961-185c-418b-a0f5-a4d55b59f3ec\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.768537 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-scripts\") pod \"7c163961-185c-418b-a0f5-a4d55b59f3ec\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.768634 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c163961-185c-418b-a0f5-a4d55b59f3ec-horizon-secret-key\") pod \"7c163961-185c-418b-a0f5-a4d55b59f3ec\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.768762 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9sfn\" (UniqueName: \"kubernetes.io/projected/7c163961-185c-418b-a0f5-a4d55b59f3ec-kube-api-access-d9sfn\") pod \"7c163961-185c-418b-a0f5-a4d55b59f3ec\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.768785 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-config-data\") pod \"7c163961-185c-418b-a0f5-a4d55b59f3ec\" (UID: \"7c163961-185c-418b-a0f5-a4d55b59f3ec\") " Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.773705 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7c163961-185c-418b-a0f5-a4d55b59f3ec-logs" (OuterVolumeSpecName: "logs") pod "7c163961-185c-418b-a0f5-a4d55b59f3ec" (UID: "7c163961-185c-418b-a0f5-a4d55b59f3ec"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.778547 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c163961-185c-418b-a0f5-a4d55b59f3ec-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7c163961-185c-418b-a0f5-a4d55b59f3ec" (UID: "7c163961-185c-418b-a0f5-a4d55b59f3ec"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.778585 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c163961-185c-418b-a0f5-a4d55b59f3ec-kube-api-access-d9sfn" (OuterVolumeSpecName: "kube-api-access-d9sfn") pod "7c163961-185c-418b-a0f5-a4d55b59f3ec" (UID: "7c163961-185c-418b-a0f5-a4d55b59f3ec"). InnerVolumeSpecName "kube-api-access-d9sfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.797347 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-config-data" (OuterVolumeSpecName: "config-data") pod "7c163961-185c-418b-a0f5-a4d55b59f3ec" (UID: "7c163961-185c-418b-a0f5-a4d55b59f3ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.805892 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-scripts" (OuterVolumeSpecName: "scripts") pod "7c163961-185c-418b-a0f5-a4d55b59f3ec" (UID: "7c163961-185c-418b-a0f5-a4d55b59f3ec"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.875507 5012 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c163961-185c-418b-a0f5-a4d55b59f3ec-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.875540 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9sfn\" (UniqueName: \"kubernetes.io/projected/7c163961-185c-418b-a0f5-a4d55b59f3ec-kube-api-access-d9sfn\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.875554 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.875563 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c163961-185c-418b-a0f5-a4d55b59f3ec-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:20 crc kubenswrapper[5012]: I0219 05:44:20.875571 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c163961-185c-418b-a0f5-a4d55b59f3ec-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.235393 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-59bfbf7475-v98h9"] Feb 19 05:44:21 crc kubenswrapper[5012]: E0219 05:44:21.235747 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" containerName="dnsmasq-dns" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.235760 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" containerName="dnsmasq-dns" Feb 19 05:44:21 crc kubenswrapper[5012]: E0219 05:44:21.235775 5012 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.235781 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon" Feb 19 05:44:21 crc kubenswrapper[5012]: E0219 05:44:21.235805 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon-log" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.235814 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon-log" Feb 19 05:44:21 crc kubenswrapper[5012]: E0219 05:44:21.235829 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" containerName="init" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.235835 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" containerName="init" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.236003 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon-log" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.236030 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="848d11a3-0f68-49f2-8cd6-d00f53f5b0d7" containerName="dnsmasq-dns" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.236043 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" containerName="horizon" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.236945 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.239975 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.240272 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.243043 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.262940 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-59bfbf7475-v98h9"] Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.387362 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c9aa274-240d-4d50-b38a-754dd493f351-log-httpd\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.387424 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwrcf\" (UniqueName: \"kubernetes.io/projected/4c9aa274-240d-4d50-b38a-754dd493f351-kube-api-access-lwrcf\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.387475 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c9aa274-240d-4d50-b38a-754dd493f351-run-httpd\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: 
I0219 05:44:21.387516 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-combined-ca-bundle\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.387605 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-internal-tls-certs\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.387629 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-public-tls-certs\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.387667 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-config-data\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.387705 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4c9aa274-240d-4d50-b38a-754dd493f351-etc-swift\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 
05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.492699 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-combined-ca-bundle\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.492828 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-internal-tls-certs\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.492857 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-public-tls-certs\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.492892 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-config-data\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.492928 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4c9aa274-240d-4d50-b38a-754dd493f351-etc-swift\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.492968 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c9aa274-240d-4d50-b38a-754dd493f351-log-httpd\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.492991 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwrcf\" (UniqueName: \"kubernetes.io/projected/4c9aa274-240d-4d50-b38a-754dd493f351-kube-api-access-lwrcf\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.493022 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c9aa274-240d-4d50-b38a-754dd493f351-run-httpd\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.494662 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c9aa274-240d-4d50-b38a-754dd493f351-log-httpd\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.499233 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-public-tls-certs\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.502031 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/4c9aa274-240d-4d50-b38a-754dd493f351-run-httpd\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.503380 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4c9aa274-240d-4d50-b38a-754dd493f351-etc-swift\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.504052 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-config-data\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.507872 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-combined-ca-bundle\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.511405 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9aa274-240d-4d50-b38a-754dd493f351-internal-tls-certs\") pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.512100 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwrcf\" (UniqueName: \"kubernetes.io/projected/4c9aa274-240d-4d50-b38a-754dd493f351-kube-api-access-lwrcf\") 
pod \"swift-proxy-59bfbf7475-v98h9\" (UID: \"4c9aa274-240d-4d50-b38a-754dd493f351\") " pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.538714 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75cc7d9585-x8r8l" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.554878 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.691361 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75cc7d9585-x8r8l"] Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.700458 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-75cc7d9585-x8r8l"] Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.810578 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.811839 5012 scope.go:117] "RemoveContainer" containerID="4812a8f6df189761983e7fbdb500126b62d33c0b69d53f9becfbce526c3f3865" Feb 19 05:44:21 crc kubenswrapper[5012]: I0219 05:44:21.812560 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 05:44:22.493375 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-59bfbf7475-v98h9"] Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 05:44:22.554614 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59bfbf7475-v98h9" event={"ID":"4c9aa274-240d-4d50-b38a-754dd493f351","Type":"ContainerStarted","Data":"9e326161256ccd442b4abda067251f70672f51e2d1e6574a1100881325273363"} Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 05:44:22.564707 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerStarted","Data":"3fe096d4e76671ad6ed28d2c1acfd3c50b1ec4a14f0f8ab2ef4419008e64c651"} Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 05:44:22.718637 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c163961-185c-418b-a0f5-a4d55b59f3ec" path="/var/lib/kubelet/pods/7c163961-185c-418b-a0f5-a4d55b59f3ec/volumes" Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 05:44:22.852786 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 05:44:22.853881 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="ceilometer-central-agent" containerID="cri-o://90ba300b50323aa9b522179eb4980608476a719c46e6c6ece43f44fc2dbdc9ad" gracePeriod=30 Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 05:44:22.854682 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="ceilometer-notification-agent" containerID="cri-o://7d42600135c89d15a2ed647cd5fc2d79a4290622986701fbe5330b3c8214cc54" gracePeriod=30 Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 05:44:22.854704 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="proxy-httpd" containerID="cri-o://6762263a345e4365421a46f2f13896eee2b40581b23287e4ae263f9733a40058" gracePeriod=30 Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 05:44:22.854759 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="sg-core" containerID="cri-o://01c17cd2fd8d4c7f25652d74baa178f4238cfbbc1ba02a9f9c5c2148a344aa2a" gracePeriod=30 Feb 19 05:44:22 crc kubenswrapper[5012]: I0219 
05:44:22.881770 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.180:3000/\": EOF" Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.593330 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59bfbf7475-v98h9" event={"ID":"4c9aa274-240d-4d50-b38a-754dd493f351","Type":"ContainerStarted","Data":"dcd35b1e238c144328bbc91eb806c457380d10214f21adb6cef468590b9f0d67"} Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.593375 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59bfbf7475-v98h9" event={"ID":"4c9aa274-240d-4d50-b38a-754dd493f351","Type":"ContainerStarted","Data":"f20697a066eca49cdf077e485aeb577db36c44950fa97a7c16cc76c2e5e2e40b"} Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.594433 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.594497 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.598872 5012 generic.go:334] "Generic (PLEG): container finished" podID="236f420e-8855-41f8-8b25-813be7b28799" containerID="6762263a345e4365421a46f2f13896eee2b40581b23287e4ae263f9733a40058" exitCode=0 Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.598916 5012 generic.go:334] "Generic (PLEG): container finished" podID="236f420e-8855-41f8-8b25-813be7b28799" containerID="01c17cd2fd8d4c7f25652d74baa178f4238cfbbc1ba02a9f9c5c2148a344aa2a" exitCode=2 Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.598924 5012 generic.go:334] "Generic (PLEG): container finished" podID="236f420e-8855-41f8-8b25-813be7b28799" containerID="90ba300b50323aa9b522179eb4980608476a719c46e6c6ece43f44fc2dbdc9ad" exitCode=0 
Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.598948 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerDied","Data":"6762263a345e4365421a46f2f13896eee2b40581b23287e4ae263f9733a40058"} Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.599006 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerDied","Data":"01c17cd2fd8d4c7f25652d74baa178f4238cfbbc1ba02a9f9c5c2148a344aa2a"} Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.599019 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerDied","Data":"90ba300b50323aa9b522179eb4980608476a719c46e6c6ece43f44fc2dbdc9ad"} Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.614034 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-59bfbf7475-v98h9" podStartSLOduration=2.6140125320000003 podStartE2EDuration="2.614012532s" podCreationTimestamp="2026-02-19 05:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:23.610023134 +0000 UTC m=+1159.643345713" watchObservedRunningTime="2026-02-19 05:44:23.614012532 +0000 UTC m=+1159.647335101" Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.828556 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Feb 19 05:44:23 crc kubenswrapper[5012]: I0219 05:44:23.837040 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Feb 19 05:44:24 crc kubenswrapper[5012]: I0219 05:44:24.660030 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Feb 19 05:44:25 crc 
kubenswrapper[5012]: I0219 05:44:25.207445 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 05:44:26 crc kubenswrapper[5012]: I0219 05:44:26.688559 5012 generic.go:334] "Generic (PLEG): container finished" podID="236f420e-8855-41f8-8b25-813be7b28799" containerID="7d42600135c89d15a2ed647cd5fc2d79a4290622986701fbe5330b3c8214cc54" exitCode=0 Feb 19 05:44:26 crc kubenswrapper[5012]: I0219 05:44:26.688635 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerDied","Data":"7d42600135c89d15a2ed647cd5fc2d79a4290622986701fbe5330b3c8214cc54"} Feb 19 05:44:26 crc kubenswrapper[5012]: I0219 05:44:26.693902 5012 generic.go:334] "Generic (PLEG): container finished" podID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerID="3fe096d4e76671ad6ed28d2c1acfd3c50b1ec4a14f0f8ab2ef4419008e64c651" exitCode=1 Feb 19 05:44:26 crc kubenswrapper[5012]: I0219 05:44:26.693934 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerDied","Data":"3fe096d4e76671ad6ed28d2c1acfd3c50b1ec4a14f0f8ab2ef4419008e64c651"} Feb 19 05:44:26 crc kubenswrapper[5012]: I0219 05:44:26.693963 5012 scope.go:117] "RemoveContainer" containerID="4812a8f6df189761983e7fbdb500126b62d33c0b69d53f9becfbce526c3f3865" Feb 19 05:44:26 crc kubenswrapper[5012]: I0219 05:44:26.694635 5012 scope.go:117] "RemoveContainer" containerID="3fe096d4e76671ad6ed28d2c1acfd3c50b1ec4a14f0f8ab2ef4419008e64c651" Feb 19 05:44:26 crc kubenswrapper[5012]: E0219 05:44:26.694854 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(7fdaa495-6cde-409a-871a-e334ca3f2a91)\"" 
pod="openstack/watcher-decision-engine-0" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" Feb 19 05:44:28 crc kubenswrapper[5012]: I0219 05:44:28.130439 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.180:3000/\": dial tcp 10.217.0.180:3000: connect: connection refused" Feb 19 05:44:30 crc kubenswrapper[5012]: I0219 05:44:30.353353 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:44:30 crc kubenswrapper[5012]: I0219 05:44:30.353910 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerName="glance-log" containerID="cri-o://c0addf6cc4fd08d20a94ea77846955a7e25ecc21b7ac41291cb427ac997c6c7a" gracePeriod=30 Feb 19 05:44:30 crc kubenswrapper[5012]: I0219 05:44:30.358437 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerName="glance-httpd" containerID="cri-o://bc599b5c1fd3d067ccfdc4bf4a2aeefedf9b008aba3555832c150c823f5147fd" gracePeriod=30 Feb 19 05:44:30 crc kubenswrapper[5012]: I0219 05:44:30.741082 5012 generic.go:334] "Generic (PLEG): container finished" podID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerID="c0addf6cc4fd08d20a94ea77846955a7e25ecc21b7ac41291cb427ac997c6c7a" exitCode=143 Feb 19 05:44:30 crc kubenswrapper[5012]: I0219 05:44:30.741128 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"74c05972-714b-4cc7-97f6-d4a2c205eb08","Type":"ContainerDied","Data":"c0addf6cc4fd08d20a94ea77846955a7e25ecc21b7ac41291cb427ac997c6c7a"} Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.578439 5012 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.604818 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-59bfbf7475-v98h9" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.639606 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.639852 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerName="glance-log" containerID="cri-o://a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5" gracePeriod=30 Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.639906 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerName="glance-httpd" containerID="cri-o://4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46" gracePeriod=30 Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.807988 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.808020 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.808982 5012 scope.go:117] "RemoveContainer" containerID="3fe096d4e76671ad6ed28d2c1acfd3c50b1ec4a14f0f8ab2ef4419008e64c651" Feb 19 05:44:31 crc kubenswrapper[5012]: E0219 05:44:31.809246 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine 
pod=watcher-decision-engine-0_openstack(7fdaa495-6cde-409a-871a-e334ca3f2a91)\"" pod="openstack/watcher-decision-engine-0" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.824287 5012 generic.go:334] "Generic (PLEG): container finished" podID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerID="bc599b5c1fd3d067ccfdc4bf4a2aeefedf9b008aba3555832c150c823f5147fd" exitCode=0 Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.824570 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"74c05972-714b-4cc7-97f6-d4a2c205eb08","Type":"ContainerDied","Data":"bc599b5c1fd3d067ccfdc4bf4a2aeefedf9b008aba3555832c150c823f5147fd"} Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.861010 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.942221 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-combined-ca-bundle\") pod \"236f420e-8855-41f8-8b25-813be7b28799\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.942346 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-sg-core-conf-yaml\") pod \"236f420e-8855-41f8-8b25-813be7b28799\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.942445 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52m69\" (UniqueName: \"kubernetes.io/projected/236f420e-8855-41f8-8b25-813be7b28799-kube-api-access-52m69\") pod \"236f420e-8855-41f8-8b25-813be7b28799\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " Feb 
19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.942545 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-log-httpd\") pod \"236f420e-8855-41f8-8b25-813be7b28799\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.942587 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-scripts\") pod \"236f420e-8855-41f8-8b25-813be7b28799\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.942848 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-run-httpd\") pod \"236f420e-8855-41f8-8b25-813be7b28799\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.942911 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-config-data\") pod \"236f420e-8855-41f8-8b25-813be7b28799\" (UID: \"236f420e-8855-41f8-8b25-813be7b28799\") " Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.943458 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "236f420e-8855-41f8-8b25-813be7b28799" (UID: "236f420e-8855-41f8-8b25-813be7b28799"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.943608 5012 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.947911 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/236f420e-8855-41f8-8b25-813be7b28799-kube-api-access-52m69" (OuterVolumeSpecName: "kube-api-access-52m69") pod "236f420e-8855-41f8-8b25-813be7b28799" (UID: "236f420e-8855-41f8-8b25-813be7b28799"). InnerVolumeSpecName "kube-api-access-52m69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.948261 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "236f420e-8855-41f8-8b25-813be7b28799" (UID: "236f420e-8855-41f8-8b25-813be7b28799"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.948382 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-scripts" (OuterVolumeSpecName: "scripts") pod "236f420e-8855-41f8-8b25-813be7b28799" (UID: "236f420e-8855-41f8-8b25-813be7b28799"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.965793 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:44:31 crc kubenswrapper[5012]: I0219 05:44:31.986917 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "236f420e-8855-41f8-8b25-813be7b28799" (UID: "236f420e-8855-41f8-8b25-813be7b28799"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.047909 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5tcx\" (UniqueName: \"kubernetes.io/projected/74c05972-714b-4cc7-97f6-d4a2c205eb08-kube-api-access-q5tcx\") pod \"74c05972-714b-4cc7-97f6-d4a2c205eb08\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.047992 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"74c05972-714b-4cc7-97f6-d4a2c205eb08\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048091 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-logs\") pod \"74c05972-714b-4cc7-97f6-d4a2c205eb08\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048120 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-scripts\") pod \"74c05972-714b-4cc7-97f6-d4a2c205eb08\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048156 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-httpd-run\") pod \"74c05972-714b-4cc7-97f6-d4a2c205eb08\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048198 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-public-tls-certs\") pod \"74c05972-714b-4cc7-97f6-d4a2c205eb08\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048337 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-config-data\") pod \"74c05972-714b-4cc7-97f6-d4a2c205eb08\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048387 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-combined-ca-bundle\") pod \"74c05972-714b-4cc7-97f6-d4a2c205eb08\" (UID: \"74c05972-714b-4cc7-97f6-d4a2c205eb08\") " Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048749 5012 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/236f420e-8855-41f8-8b25-813be7b28799-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048764 5012 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048774 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52m69\" (UniqueName: 
\"kubernetes.io/projected/236f420e-8855-41f8-8b25-813be7b28799-kube-api-access-52m69\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.048785 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.054247 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74c05972-714b-4cc7-97f6-d4a2c205eb08-kube-api-access-q5tcx" (OuterVolumeSpecName: "kube-api-access-q5tcx") pod "74c05972-714b-4cc7-97f6-d4a2c205eb08" (UID: "74c05972-714b-4cc7-97f6-d4a2c205eb08"). InnerVolumeSpecName "kube-api-access-q5tcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.054601 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-logs" (OuterVolumeSpecName: "logs") pod "74c05972-714b-4cc7-97f6-d4a2c205eb08" (UID: "74c05972-714b-4cc7-97f6-d4a2c205eb08"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.058675 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "74c05972-714b-4cc7-97f6-d4a2c205eb08" (UID: "74c05972-714b-4cc7-97f6-d4a2c205eb08"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.058748 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "74c05972-714b-4cc7-97f6-d4a2c205eb08" (UID: "74c05972-714b-4cc7-97f6-d4a2c205eb08"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.060927 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-scripts" (OuterVolumeSpecName: "scripts") pod "74c05972-714b-4cc7-97f6-d4a2c205eb08" (UID: "74c05972-714b-4cc7-97f6-d4a2c205eb08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.062450 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "236f420e-8855-41f8-8b25-813be7b28799" (UID: "236f420e-8855-41f8-8b25-813be7b28799"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.101294 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74c05972-714b-4cc7-97f6-d4a2c205eb08" (UID: "74c05972-714b-4cc7-97f6-d4a2c205eb08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.104155 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-config-data" (OuterVolumeSpecName: "config-data") pod "236f420e-8855-41f8-8b25-813be7b28799" (UID: "236f420e-8855-41f8-8b25-813be7b28799"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.131238 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-config-data" (OuterVolumeSpecName: "config-data") pod "74c05972-714b-4cc7-97f6-d4a2c205eb08" (UID: "74c05972-714b-4cc7-97f6-d4a2c205eb08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.154854 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.154879 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.154889 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5tcx\" (UniqueName: \"kubernetes.io/projected/74c05972-714b-4cc7-97f6-d4a2c205eb08-kube-api-access-q5tcx\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.154899 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.154918 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.154927 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/236f420e-8855-41f8-8b25-813be7b28799-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.154936 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.154944 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.154951 5012 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74c05972-714b-4cc7-97f6-d4a2c205eb08-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.157822 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "74c05972-714b-4cc7-97f6-d4a2c205eb08" (UID: "74c05972-714b-4cc7-97f6-d4a2c205eb08"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.188703 5012 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.256871 5012 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.256906 5012 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c05972-714b-4cc7-97f6-d4a2c205eb08-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.835956 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"75258dbe-c223-4e55-92a6-8e588745294a","Type":"ContainerStarted","Data":"f641cbad619b4bb09865d3af7634c8b71722cdfa0c947251105b37553a070d26"} Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.840337 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"236f420e-8855-41f8-8b25-813be7b28799","Type":"ContainerDied","Data":"0b4212ecca9b60999638c1e6662994f4b7843d12f33587c1778eba71df434b72"} Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.840382 5012 scope.go:117] "RemoveContainer" containerID="6762263a345e4365421a46f2f13896eee2b40581b23287e4ae263f9733a40058" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.840501 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.846386 5012 generic.go:334] "Generic (PLEG): container finished" podID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerID="a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5" exitCode=143 Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.846530 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"50127c6b-476e-473a-877d-00fd5feb6bb4","Type":"ContainerDied","Data":"a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5"} Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.850408 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"74c05972-714b-4cc7-97f6-d4a2c205eb08","Type":"ContainerDied","Data":"884f09cbda393c2ecb1a2ab4bc0243e004e662fb5c7beaf39c14a2c689ed4fc6"} Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.850510 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.874654 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.378274568 podStartE2EDuration="16.874630558s" podCreationTimestamp="2026-02-19 05:44:16 +0000 UTC" firstStartedPulling="2026-02-19 05:44:17.011292869 +0000 UTC m=+1153.044615438" lastFinishedPulling="2026-02-19 05:44:31.507648869 +0000 UTC m=+1167.540971428" observedRunningTime="2026-02-19 05:44:32.862245347 +0000 UTC m=+1168.895567936" watchObservedRunningTime="2026-02-19 05:44:32.874630558 +0000 UTC m=+1168.907953117" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.924181 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.928008 5012 scope.go:117] "RemoveContainer" containerID="01c17cd2fd8d4c7f25652d74baa178f4238cfbbc1ba02a9f9c5c2148a344aa2a" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.940055 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.960761 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.970567 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.980608 5012 scope.go:117] "RemoveContainer" containerID="7d42600135c89d15a2ed647cd5fc2d79a4290622986701fbe5330b3c8214cc54" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.985557 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:44:32 crc kubenswrapper[5012]: E0219 05:44:32.986216 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="sg-core" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986239 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="sg-core" Feb 19 05:44:32 crc kubenswrapper[5012]: E0219 05:44:32.986261 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerName="glance-log" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986272 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerName="glance-log" Feb 19 05:44:32 crc kubenswrapper[5012]: E0219 05:44:32.986288 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerName="glance-httpd" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986319 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerName="glance-httpd" Feb 19 05:44:32 crc kubenswrapper[5012]: E0219 05:44:32.986342 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="ceilometer-notification-agent" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986350 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="ceilometer-notification-agent" Feb 19 05:44:32 crc kubenswrapper[5012]: E0219 05:44:32.986360 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="proxy-httpd" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986367 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="proxy-httpd" Feb 19 05:44:32 crc kubenswrapper[5012]: E0219 05:44:32.986381 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="ceilometer-central-agent" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986388 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="ceilometer-central-agent" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986616 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="ceilometer-notification-agent" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986641 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="sg-core" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986656 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerName="glance-httpd" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986670 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="ceilometer-central-agent" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986682 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="74c05972-714b-4cc7-97f6-d4a2c205eb08" containerName="glance-log" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.986693 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="236f420e-8855-41f8-8b25-813be7b28799" containerName="proxy-httpd" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.988341 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:44:32 crc kubenswrapper[5012]: I0219 05:44:32.996469 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.003664 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.006262 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.036046 5012 scope.go:117] "RemoveContainer" containerID="90ba300b50323aa9b522179eb4980608476a719c46e6c6ece43f44fc2dbdc9ad" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.043161 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.045170 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.046091 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.048723 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.080057 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.080104 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-config-data\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.080184 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.080206 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.080268 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqwm4\" (UniqueName: \"kubernetes.io/projected/8cfddc12-1c4c-4faf-9edb-71fb80608785-kube-api-access-lqwm4\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.080361 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-scripts\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.080385 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cfddc12-1c4c-4faf-9edb-71fb80608785-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.080441 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cfddc12-1c4c-4faf-9edb-71fb80608785-logs\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.086444 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.088033 5012 scope.go:117] "RemoveContainer" containerID="bc599b5c1fd3d067ccfdc4bf4a2aeefedf9b008aba3555832c150c823f5147fd" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.114147 5012 scope.go:117] "RemoveContainer" containerID="c0addf6cc4fd08d20a94ea77846955a7e25ecc21b7ac41291cb427ac997c6c7a" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183286 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqwm4\" (UniqueName: \"kubernetes.io/projected/8cfddc12-1c4c-4faf-9edb-71fb80608785-kube-api-access-lqwm4\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183442 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-config-data\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183488 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-scripts\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183523 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cfddc12-1c4c-4faf-9edb-71fb80608785-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183559 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-run-httpd\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183582 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9h47\" (UniqueName: \"kubernetes.io/projected/1b39b6f2-c394-449b-9c41-1b09eabce119-kube-api-access-m9h47\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183628 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183664 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-scripts\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183690 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cfddc12-1c4c-4faf-9edb-71fb80608785-logs\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183740 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183775 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183802 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-config-data\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183843 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-log-httpd\") pod \"ceilometer-0\" (UID: 
\"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183889 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.183918 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.185829 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.185935 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8cfddc12-1c4c-4faf-9edb-71fb80608785-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.185990 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cfddc12-1c4c-4faf-9edb-71fb80608785-logs\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " 
pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.191393 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.193746 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-scripts\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.194236 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-config-data\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.204290 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfddc12-1c4c-4faf-9edb-71fb80608785-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.205289 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqwm4\" (UniqueName: \"kubernetes.io/projected/8cfddc12-1c4c-4faf-9edb-71fb80608785-kube-api-access-lqwm4\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: 
I0219 05:44:33.219913 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"8cfddc12-1c4c-4faf-9edb-71fb80608785\") " pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.292526 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.292591 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-log-httpd\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.292670 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-config-data\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.292705 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9h47\" (UniqueName: \"kubernetes.io/projected/1b39b6f2-c394-449b-9c41-1b09eabce119-kube-api-access-m9h47\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.292721 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-run-httpd\") pod 
\"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.292748 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.292767 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-scripts\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.294907 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-run-httpd\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.300582 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.300948 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-log-httpd\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.308617 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-config-data\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.311040 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9h47\" (UniqueName: \"kubernetes.io/projected/1b39b6f2-c394-449b-9c41-1b09eabce119-kube-api-access-m9h47\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.323722 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-scripts\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.330238 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.341217 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.374485 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.511326 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.600976 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-httpd-run\") pod \"50127c6b-476e-473a-877d-00fd5feb6bb4\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.601145 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-internal-tls-certs\") pod \"50127c6b-476e-473a-877d-00fd5feb6bb4\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.601182 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-logs\") pod \"50127c6b-476e-473a-877d-00fd5feb6bb4\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.601207 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"50127c6b-476e-473a-877d-00fd5feb6bb4\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.601241 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wq\" (UniqueName: \"kubernetes.io/projected/50127c6b-476e-473a-877d-00fd5feb6bb4-kube-api-access-2d4wq\") pod \"50127c6b-476e-473a-877d-00fd5feb6bb4\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.601398 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-scripts\") pod \"50127c6b-476e-473a-877d-00fd5feb6bb4\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.601450 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-config-data\") pod \"50127c6b-476e-473a-877d-00fd5feb6bb4\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.601623 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-combined-ca-bundle\") pod \"50127c6b-476e-473a-877d-00fd5feb6bb4\" (UID: \"50127c6b-476e-473a-877d-00fd5feb6bb4\") " Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.634746 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-logs" (OuterVolumeSpecName: "logs") pod "50127c6b-476e-473a-877d-00fd5feb6bb4" (UID: "50127c6b-476e-473a-877d-00fd5feb6bb4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.637128 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "50127c6b-476e-473a-877d-00fd5feb6bb4" (UID: "50127c6b-476e-473a-877d-00fd5feb6bb4"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.640593 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "50127c6b-476e-473a-877d-00fd5feb6bb4" (UID: "50127c6b-476e-473a-877d-00fd5feb6bb4"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.654473 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50127c6b-476e-473a-877d-00fd5feb6bb4-kube-api-access-2d4wq" (OuterVolumeSpecName: "kube-api-access-2d4wq") pod "50127c6b-476e-473a-877d-00fd5feb6bb4" (UID: "50127c6b-476e-473a-877d-00fd5feb6bb4"). InnerVolumeSpecName "kube-api-access-2d4wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.683531 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-scripts" (OuterVolumeSpecName: "scripts") pod "50127c6b-476e-473a-877d-00fd5feb6bb4" (UID: "50127c6b-476e-473a-877d-00fd5feb6bb4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.700538 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50127c6b-476e-473a-877d-00fd5feb6bb4" (UID: "50127c6b-476e-473a-877d-00fd5feb6bb4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.704643 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.704659 5012 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.704670 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50127c6b-476e-473a-877d-00fd5feb6bb4-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.704690 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.704700 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wq\" (UniqueName: \"kubernetes.io/projected/50127c6b-476e-473a-877d-00fd5feb6bb4-kube-api-access-2d4wq\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.704710 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.769157 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-config-data" (OuterVolumeSpecName: "config-data") pod "50127c6b-476e-473a-877d-00fd5feb6bb4" (UID: "50127c6b-476e-473a-877d-00fd5feb6bb4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.793686 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "50127c6b-476e-473a-877d-00fd5feb6bb4" (UID: "50127c6b-476e-473a-877d-00fd5feb6bb4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.808467 5012 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.808501 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50127c6b-476e-473a-877d-00fd5feb6bb4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.811831 5012 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.916551 5012 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.922485 5012 generic.go:334] "Generic (PLEG): container finished" podID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerID="4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46" exitCode=0 Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.922948 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.924375 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"50127c6b-476e-473a-877d-00fd5feb6bb4","Type":"ContainerDied","Data":"4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46"} Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.924468 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"50127c6b-476e-473a-877d-00fd5feb6bb4","Type":"ContainerDied","Data":"9f6241f52b36b9304734fa39b59f3e6db469ba06ede3efe69c7f2c281f65bc4e"} Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.924497 5012 scope.go:117] "RemoveContainer" containerID="4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46" Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.976454 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:33 crc kubenswrapper[5012]: I0219 05:44:33.993487 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.008084 5012 scope.go:117] "RemoveContainer" containerID="a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.022691 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.041244 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.053218 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:44:34 crc kubenswrapper[5012]: E0219 05:44:34.053709 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerName="glance-httpd" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.053728 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerName="glance-httpd" Feb 19 05:44:34 crc kubenswrapper[5012]: E0219 05:44:34.053768 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerName="glance-log" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.053777 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerName="glance-log" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.055214 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerName="glance-log" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.055243 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="50127c6b-476e-473a-877d-00fd5feb6bb4" containerName="glance-httpd" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.058310 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.060840 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.061087 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.061256 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.070770 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:34 crc kubenswrapper[5012]: W0219 05:44:34.075628 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b39b6f2_c394_449b_9c41_1b09eabce119.slice/crio-f3390505531dfd48ffffb6923d40092eac0cabd7f0dafe8f2f9410a259a092e8 WatchSource:0}: Error finding container f3390505531dfd48ffffb6923d40092eac0cabd7f0dafe8f2f9410a259a092e8: Status 404 returned error can't find the container with id f3390505531dfd48ffffb6923d40092eac0cabd7f0dafe8f2f9410a259a092e8 Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.088054 5012 scope.go:117] "RemoveContainer" containerID="4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46" Feb 19 05:44:34 crc kubenswrapper[5012]: E0219 05:44:34.089041 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46\": container with ID starting with 4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46 not found: ID does not exist" containerID="4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.097450 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46"} err="failed to get container status \"4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46\": rpc error: code = NotFound desc = could not find container \"4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46\": container with ID starting with 4350c47a91c7eab9c0ce5571b2b0861682336a72f0cd793252e1e04f39d78f46 not found: ID does not exist" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.097523 5012 scope.go:117] "RemoveContainer" containerID="a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5" Feb 19 05:44:34 crc kubenswrapper[5012]: E0219 05:44:34.098343 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5\": container with ID starting with a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5 not found: ID does not exist" containerID="a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.098405 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5"} err="failed to get container status \"a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5\": rpc error: code = NotFound desc = could not find container \"a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5\": container with ID starting with a5023ce7497f24674c3b19007ab66ee22785b775de6400f3d270de29f00b95f5 not found: ID does not exist" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.120777 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzhn2\" (UniqueName: 
\"kubernetes.io/projected/f55309b7-09e5-4496-8995-f03681386729-kube-api-access-hzhn2\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.120832 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.120868 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f55309b7-09e5-4496-8995-f03681386729-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.120887 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f55309b7-09e5-4496-8995-f03681386729-logs\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.120907 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.120962 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.120993 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.121011 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.223787 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzhn2\" (UniqueName: \"kubernetes.io/projected/f55309b7-09e5-4496-8995-f03681386729-kube-api-access-hzhn2\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.223906 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.223945 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/f55309b7-09e5-4496-8995-f03681386729-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.223967 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f55309b7-09e5-4496-8995-f03681386729-logs\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.223990 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.224076 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.224137 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.224157 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.224411 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f55309b7-09e5-4496-8995-f03681386729-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.224659 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f55309b7-09e5-4496-8995-f03681386729-logs\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.224781 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.242604 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.242803 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " 
pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.243530 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.251582 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55309b7-09e5-4496-8995-f03681386729-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.252034 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzhn2\" (UniqueName: \"kubernetes.io/projected/f55309b7-09e5-4496-8995-f03681386729-kube-api-access-hzhn2\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.265018 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"f55309b7-09e5-4496-8995-f03681386729\") " pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.385279 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.727946 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="236f420e-8855-41f8-8b25-813be7b28799" path="/var/lib/kubelet/pods/236f420e-8855-41f8-8b25-813be7b28799/volumes" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.729206 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50127c6b-476e-473a-877d-00fd5feb6bb4" path="/var/lib/kubelet/pods/50127c6b-476e-473a-877d-00fd5feb6bb4/volumes" Feb 19 05:44:34 crc kubenswrapper[5012]: I0219 05:44:34.730582 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74c05972-714b-4cc7-97f6-d4a2c205eb08" path="/var/lib/kubelet/pods/74c05972-714b-4cc7-97f6-d4a2c205eb08/volumes" Feb 19 05:44:35 crc kubenswrapper[5012]: I0219 05:44:35.012109 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerStarted","Data":"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3"} Feb 19 05:44:35 crc kubenswrapper[5012]: I0219 05:44:35.012563 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerStarted","Data":"f3390505531dfd48ffffb6923d40092eac0cabd7f0dafe8f2f9410a259a092e8"} Feb 19 05:44:35 crc kubenswrapper[5012]: I0219 05:44:35.036543 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 05:44:35 crc kubenswrapper[5012]: I0219 05:44:35.036973 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8cfddc12-1c4c-4faf-9edb-71fb80608785","Type":"ContainerStarted","Data":"a07a215e56bc0e36f22b891ce490691f5090648c771dc08f0cfc827d1d4d7d16"} Feb 19 05:44:35 crc kubenswrapper[5012]: I0219 05:44:35.037021 5012 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8cfddc12-1c4c-4faf-9edb-71fb80608785","Type":"ContainerStarted","Data":"7a4c697cb6fe382f66c0123db4073f247ee72978d1591752485504ead840944a"} Feb 19 05:44:35 crc kubenswrapper[5012]: I0219 05:44:35.668721 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:36 crc kubenswrapper[5012]: I0219 05:44:36.053265 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8cfddc12-1c4c-4faf-9edb-71fb80608785","Type":"ContainerStarted","Data":"531dcb1099e0ff4ea1b58f4b2eeecebbe921e6e4b7e4593112729de65ca8fada"} Feb 19 05:44:36 crc kubenswrapper[5012]: I0219 05:44:36.055928 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerStarted","Data":"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0"} Feb 19 05:44:36 crc kubenswrapper[5012]: I0219 05:44:36.055972 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerStarted","Data":"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f"} Feb 19 05:44:36 crc kubenswrapper[5012]: I0219 05:44:36.057940 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f55309b7-09e5-4496-8995-f03681386729","Type":"ContainerStarted","Data":"74a78037986e932b26327ff91893dabb73c43e6fd09d9775961dff6280864fb8"} Feb 19 05:44:36 crc kubenswrapper[5012]: I0219 05:44:36.058048 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f55309b7-09e5-4496-8995-f03681386729","Type":"ContainerStarted","Data":"6d126f6471b267813789f286763a4d78d61062c05bc120621e5ce174d1455fe7"} Feb 19 05:44:37 crc kubenswrapper[5012]: I0219 
05:44:37.074196 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f55309b7-09e5-4496-8995-f03681386729","Type":"ContainerStarted","Data":"b0e0db07b2277ab3c92bcf3557604a255cfe117c0f4c45cb962fff603c6fb7d1"} Feb 19 05:44:37 crc kubenswrapper[5012]: I0219 05:44:37.092522 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.092507522 podStartE2EDuration="5.092507522s" podCreationTimestamp="2026-02-19 05:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:36.078714609 +0000 UTC m=+1172.112037178" watchObservedRunningTime="2026-02-19 05:44:37.092507522 +0000 UTC m=+1173.125830091" Feb 19 05:44:37 crc kubenswrapper[5012]: I0219 05:44:37.104160 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.104141996 podStartE2EDuration="4.104141996s" podCreationTimestamp="2026-02-19 05:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:37.093554128 +0000 UTC m=+1173.126876697" watchObservedRunningTime="2026-02-19 05:44:37.104141996 +0000 UTC m=+1173.137464565" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.087596 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="ceilometer-central-agent" containerID="cri-o://5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3" gracePeriod=30 Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.087962 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerStarted","Data":"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2"} Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.087998 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.088013 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="proxy-httpd" containerID="cri-o://d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2" gracePeriod=30 Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.088142 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="sg-core" containerID="cri-o://d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0" gracePeriod=30 Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.088178 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="ceilometer-notification-agent" containerID="cri-o://6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f" gracePeriod=30 Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.122936 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.152808624 podStartE2EDuration="6.1229164s" podCreationTimestamp="2026-02-19 05:44:32 +0000 UTC" firstStartedPulling="2026-02-19 05:44:34.088065541 +0000 UTC m=+1170.121388110" lastFinishedPulling="2026-02-19 05:44:37.058173317 +0000 UTC m=+1173.091495886" observedRunningTime="2026-02-19 05:44:38.120621775 +0000 UTC m=+1174.153944364" watchObservedRunningTime="2026-02-19 05:44:38.1229164 +0000 UTC m=+1174.156238969" Feb 19 05:44:38 crc 
kubenswrapper[5012]: I0219 05:44:38.533397 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-j7vgh"] Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.534570 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-j7vgh" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.553330 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-j7vgh"] Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.630789 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-vj27c"] Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.632040 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vj27c" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.645249 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a4a6-account-create-update-tz4l9"] Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.646537 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a4a6-account-create-update-tz4l9" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.653961 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.659014 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a4a6-account-create-update-tz4l9"] Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.666084 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vj27c"] Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.730182 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsvz7\" (UniqueName: \"kubernetes.io/projected/2fc398d7-f426-420d-981c-6bda415a2ce0-kube-api-access-xsvz7\") pod \"nova-api-db-create-j7vgh\" (UID: \"2fc398d7-f426-420d-981c-6bda415a2ce0\") " pod="openstack/nova-api-db-create-j7vgh" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.730234 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fc398d7-f426-420d-981c-6bda415a2ce0-operator-scripts\") pod \"nova-api-db-create-j7vgh\" (UID: \"2fc398d7-f426-420d-981c-6bda415a2ce0\") " pod="openstack/nova-api-db-create-j7vgh" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.818595 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-9gsgt"] Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.820050 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9gsgt" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.833348 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wchr2\" (UniqueName: \"kubernetes.io/projected/cd4d5a16-81ab-4336-99d5-570d83e4baaa-kube-api-access-wchr2\") pod \"nova-api-a4a6-account-create-update-tz4l9\" (UID: \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\") " pod="openstack/nova-api-a4a6-account-create-update-tz4l9" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.833393 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fc398d7-f426-420d-981c-6bda415a2ce0-operator-scripts\") pod \"nova-api-db-create-j7vgh\" (UID: \"2fc398d7-f426-420d-981c-6bda415a2ce0\") " pod="openstack/nova-api-db-create-j7vgh" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.833540 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd4d5a16-81ab-4336-99d5-570d83e4baaa-operator-scripts\") pod \"nova-api-a4a6-account-create-update-tz4l9\" (UID: \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\") " pod="openstack/nova-api-a4a6-account-create-update-tz4l9" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.833564 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-operator-scripts\") pod \"nova-cell0-db-create-vj27c\" (UID: \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\") " pod="openstack/nova-cell0-db-create-vj27c" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.833592 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsvz7\" (UniqueName: 
\"kubernetes.io/projected/2fc398d7-f426-420d-981c-6bda415a2ce0-kube-api-access-xsvz7\") pod \"nova-api-db-create-j7vgh\" (UID: \"2fc398d7-f426-420d-981c-6bda415a2ce0\") " pod="openstack/nova-api-db-create-j7vgh" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.833624 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hn7v\" (UniqueName: \"kubernetes.io/projected/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-kube-api-access-6hn7v\") pod \"nova-cell0-db-create-vj27c\" (UID: \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\") " pod="openstack/nova-cell0-db-create-vj27c" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.834269 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fc398d7-f426-420d-981c-6bda415a2ce0-operator-scripts\") pod \"nova-api-db-create-j7vgh\" (UID: \"2fc398d7-f426-420d-981c-6bda415a2ce0\") " pod="openstack/nova-api-db-create-j7vgh" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.834690 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-b3d3-account-create-update-jv5jh"] Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.835982 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.838035 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.854358 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b3d3-account-create-update-jv5jh"] Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.864130 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsvz7\" (UniqueName: \"kubernetes.io/projected/2fc398d7-f426-420d-981c-6bda415a2ce0-kube-api-access-xsvz7\") pod \"nova-api-db-create-j7vgh\" (UID: \"2fc398d7-f426-420d-981c-6bda415a2ce0\") " pod="openstack/nova-api-db-create-j7vgh" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.866034 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9gsgt"] Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.894023 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-j7vgh" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.935700 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-operator-scripts\") pod \"nova-cell1-db-create-9gsgt\" (UID: \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\") " pod="openstack/nova-cell1-db-create-9gsgt" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.935773 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hs9f\" (UniqueName: \"kubernetes.io/projected/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-kube-api-access-5hs9f\") pod \"nova-cell1-db-create-9gsgt\" (UID: \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\") " pod="openstack/nova-cell1-db-create-9gsgt" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.935863 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd4d5a16-81ab-4336-99d5-570d83e4baaa-operator-scripts\") pod \"nova-api-a4a6-account-create-update-tz4l9\" (UID: \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\") " pod="openstack/nova-api-a4a6-account-create-update-tz4l9" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.935887 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-operator-scripts\") pod \"nova-cell0-db-create-vj27c\" (UID: \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\") " pod="openstack/nova-cell0-db-create-vj27c" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.935919 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hn7v\" (UniqueName: \"kubernetes.io/projected/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-kube-api-access-6hn7v\") pod 
\"nova-cell0-db-create-vj27c\" (UID: \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\") " pod="openstack/nova-cell0-db-create-vj27c" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.935948 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wchr2\" (UniqueName: \"kubernetes.io/projected/cd4d5a16-81ab-4336-99d5-570d83e4baaa-kube-api-access-wchr2\") pod \"nova-api-a4a6-account-create-update-tz4l9\" (UID: \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\") " pod="openstack/nova-api-a4a6-account-create-update-tz4l9" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.936775 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-operator-scripts\") pod \"nova-cell0-db-create-vj27c\" (UID: \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\") " pod="openstack/nova-cell0-db-create-vj27c" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.936777 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd4d5a16-81ab-4336-99d5-570d83e4baaa-operator-scripts\") pod \"nova-api-a4a6-account-create-update-tz4l9\" (UID: \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\") " pod="openstack/nova-api-a4a6-account-create-update-tz4l9" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.961353 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hn7v\" (UniqueName: \"kubernetes.io/projected/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-kube-api-access-6hn7v\") pod \"nova-cell0-db-create-vj27c\" (UID: \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\") " pod="openstack/nova-cell0-db-create-vj27c" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.977015 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-vj27c" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.978931 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wchr2\" (UniqueName: \"kubernetes.io/projected/cd4d5a16-81ab-4336-99d5-570d83e4baaa-kube-api-access-wchr2\") pod \"nova-api-a4a6-account-create-update-tz4l9\" (UID: \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\") " pod="openstack/nova-api-a4a6-account-create-update-tz4l9" Feb 19 05:44:38 crc kubenswrapper[5012]: I0219 05:44:38.994064 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a4a6-account-create-update-tz4l9" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.044895 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwbw6\" (UniqueName: \"kubernetes.io/projected/0b1a4d80-a736-41c3-9157-c0a696c10eff-kube-api-access-bwbw6\") pod \"nova-cell0-b3d3-account-create-update-jv5jh\" (UID: \"0b1a4d80-a736-41c3-9157-c0a696c10eff\") " pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.048484 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-operator-scripts\") pod \"nova-cell1-db-create-9gsgt\" (UID: \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\") " pod="openstack/nova-cell1-db-create-9gsgt" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.048748 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hs9f\" (UniqueName: \"kubernetes.io/projected/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-kube-api-access-5hs9f\") pod \"nova-cell1-db-create-9gsgt\" (UID: \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\") " pod="openstack/nova-cell1-db-create-9gsgt" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.048825 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b1a4d80-a736-41c3-9157-c0a696c10eff-operator-scripts\") pod \"nova-cell0-b3d3-account-create-update-jv5jh\" (UID: \"0b1a4d80-a736-41c3-9157-c0a696c10eff\") " pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.055867 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-fc68-account-create-update-tfrzr"] Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.058685 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.058994 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-operator-scripts\") pod \"nova-cell1-db-create-9gsgt\" (UID: \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\") " pod="openstack/nova-cell1-db-create-9gsgt" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.060608 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.070507 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hs9f\" (UniqueName: \"kubernetes.io/projected/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-kube-api-access-5hs9f\") pod \"nova-cell1-db-create-9gsgt\" (UID: \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\") " pod="openstack/nova-cell1-db-create-9gsgt" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.095781 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.114854 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-fc68-account-create-update-tfrzr"] Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132389 5012 generic.go:334] "Generic (PLEG): container finished" podID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerID="d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2" exitCode=0 Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132437 5012 generic.go:334] "Generic (PLEG): container finished" podID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerID="d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0" exitCode=2 Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132446 5012 generic.go:334] "Generic (PLEG): container finished" podID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerID="6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f" exitCode=0 Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132456 5012 generic.go:334] "Generic (PLEG): container finished" podID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerID="5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3" exitCode=0 Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132487 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerDied","Data":"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2"} Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132525 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerDied","Data":"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0"} Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132540 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerDied","Data":"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f"} Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132549 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerDied","Data":"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3"} Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132558 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b39b6f2-c394-449b-9c41-1b09eabce119","Type":"ContainerDied","Data":"f3390505531dfd48ffffb6923d40092eac0cabd7f0dafe8f2f9410a259a092e8"} Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132577 5012 scope.go:117] "RemoveContainer" containerID="d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.132761 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.150846 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwbw6\" (UniqueName: \"kubernetes.io/projected/0b1a4d80-a736-41c3-9157-c0a696c10eff-kube-api-access-bwbw6\") pod \"nova-cell0-b3d3-account-create-update-jv5jh\" (UID: \"0b1a4d80-a736-41c3-9157-c0a696c10eff\") " pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.150975 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b1a4d80-a736-41c3-9157-c0a696c10eff-operator-scripts\") pod \"nova-cell0-b3d3-account-create-update-jv5jh\" (UID: \"0b1a4d80-a736-41c3-9157-c0a696c10eff\") " pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.152156 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b1a4d80-a736-41c3-9157-c0a696c10eff-operator-scripts\") pod \"nova-cell0-b3d3-account-create-update-jv5jh\" (UID: \"0b1a4d80-a736-41c3-9157-c0a696c10eff\") " pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.156935 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9gsgt" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.171717 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwbw6\" (UniqueName: \"kubernetes.io/projected/0b1a4d80-a736-41c3-9157-c0a696c10eff-kube-api-access-bwbw6\") pod \"nova-cell0-b3d3-account-create-update-jv5jh\" (UID: \"0b1a4d80-a736-41c3-9157-c0a696c10eff\") " pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.220266 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.256036 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9h47\" (UniqueName: \"kubernetes.io/projected/1b39b6f2-c394-449b-9c41-1b09eabce119-kube-api-access-m9h47\") pod \"1b39b6f2-c394-449b-9c41-1b09eabce119\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.256249 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-run-httpd\") pod \"1b39b6f2-c394-449b-9c41-1b09eabce119\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.256287 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-scripts\") pod \"1b39b6f2-c394-449b-9c41-1b09eabce119\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.256323 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-log-httpd\") pod 
\"1b39b6f2-c394-449b-9c41-1b09eabce119\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.256370 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-sg-core-conf-yaml\") pod \"1b39b6f2-c394-449b-9c41-1b09eabce119\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.256391 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-config-data\") pod \"1b39b6f2-c394-449b-9c41-1b09eabce119\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.256419 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-combined-ca-bundle\") pod \"1b39b6f2-c394-449b-9c41-1b09eabce119\" (UID: \"1b39b6f2-c394-449b-9c41-1b09eabce119\") " Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.256693 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80e98ac0-3018-4566-95b3-2d2dfd3e234e-operator-scripts\") pod \"nova-cell1-fc68-account-create-update-tfrzr\" (UID: \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\") " pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.256754 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5svrw\" (UniqueName: \"kubernetes.io/projected/80e98ac0-3018-4566-95b3-2d2dfd3e234e-kube-api-access-5svrw\") pod \"nova-cell1-fc68-account-create-update-tfrzr\" (UID: \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\") " 
pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.258014 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1b39b6f2-c394-449b-9c41-1b09eabce119" (UID: "1b39b6f2-c394-449b-9c41-1b09eabce119"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.260828 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1b39b6f2-c394-449b-9c41-1b09eabce119" (UID: "1b39b6f2-c394-449b-9c41-1b09eabce119"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.265766 5012 scope.go:117] "RemoveContainer" containerID="d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.265939 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-scripts" (OuterVolumeSpecName: "scripts") pod "1b39b6f2-c394-449b-9c41-1b09eabce119" (UID: "1b39b6f2-c394-449b-9c41-1b09eabce119"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.271017 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b39b6f2-c394-449b-9c41-1b09eabce119-kube-api-access-m9h47" (OuterVolumeSpecName: "kube-api-access-m9h47") pod "1b39b6f2-c394-449b-9c41-1b09eabce119" (UID: "1b39b6f2-c394-449b-9c41-1b09eabce119"). InnerVolumeSpecName "kube-api-access-m9h47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.304448 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1b39b6f2-c394-449b-9c41-1b09eabce119" (UID: "1b39b6f2-c394-449b-9c41-1b09eabce119"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.359293 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80e98ac0-3018-4566-95b3-2d2dfd3e234e-operator-scripts\") pod \"nova-cell1-fc68-account-create-update-tfrzr\" (UID: \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\") " pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.359382 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5svrw\" (UniqueName: \"kubernetes.io/projected/80e98ac0-3018-4566-95b3-2d2dfd3e234e-kube-api-access-5svrw\") pod \"nova-cell1-fc68-account-create-update-tfrzr\" (UID: \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\") " pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.359502 5012 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.359513 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.359522 5012 reconciler_common.go:293] "Volume detached for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b39b6f2-c394-449b-9c41-1b09eabce119-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.359531 5012 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.359541 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9h47\" (UniqueName: \"kubernetes.io/projected/1b39b6f2-c394-449b-9c41-1b09eabce119-kube-api-access-m9h47\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.362682 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80e98ac0-3018-4566-95b3-2d2dfd3e234e-operator-scripts\") pod \"nova-cell1-fc68-account-create-update-tfrzr\" (UID: \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\") " pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.378801 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5svrw\" (UniqueName: \"kubernetes.io/projected/80e98ac0-3018-4566-95b3-2d2dfd3e234e-kube-api-access-5svrw\") pod \"nova-cell1-fc68-account-create-update-tfrzr\" (UID: \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\") " pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.399860 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b39b6f2-c394-449b-9c41-1b09eabce119" (UID: "1b39b6f2-c394-449b-9c41-1b09eabce119"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.412465 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-config-data" (OuterVolumeSpecName: "config-data") pod "1b39b6f2-c394-449b-9c41-1b09eabce119" (UID: "1b39b6f2-c394-449b-9c41-1b09eabce119"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.450799 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.462906 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.462945 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b39b6f2-c394-449b-9c41-1b09eabce119-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.528830 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.536331 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.572426 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:39 crc kubenswrapper[5012]: E0219 05:44:39.572806 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="proxy-httpd" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.572822 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="proxy-httpd" Feb 19 05:44:39 crc kubenswrapper[5012]: E0219 05:44:39.572841 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="sg-core" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.572847 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="sg-core" Feb 19 05:44:39 crc kubenswrapper[5012]: E0219 05:44:39.572866 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="ceilometer-central-agent" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.572872 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="ceilometer-central-agent" Feb 19 05:44:39 crc kubenswrapper[5012]: E0219 05:44:39.572882 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="ceilometer-notification-agent" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.572888 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="ceilometer-notification-agent" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.573052 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="proxy-httpd" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.573068 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="sg-core" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.573085 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="ceilometer-notification-agent" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.573100 5012 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" containerName="ceilometer-central-agent" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.580486 5012 scope.go:117] "RemoveContainer" containerID="6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.586253 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.594764 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.594960 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.625379 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-j7vgh"] Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.677647 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:39 crc kubenswrapper[5012]: W0219 05:44:39.724447 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fc398d7_f426_420d_981c_6bda415a2ce0.slice/crio-b5f8ebe3d33920b34ec3478c99b9fa3b4b46afe5cb6d104ea5353e6955b7bf88 WatchSource:0}: Error finding container b5f8ebe3d33920b34ec3478c99b9fa3b4b46afe5cb6d104ea5353e6955b7bf88: Status 404 returned error can't find the container with id b5f8ebe3d33920b34ec3478c99b9fa3b4b46afe5cb6d104ea5353e6955b7bf88 Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.737038 5012 scope.go:117] "RemoveContainer" containerID="5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.764563 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-a4a6-account-create-update-tz4l9"] Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.778885 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.778932 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-scripts\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.778963 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-log-httpd\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.779557 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.779588 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-config-data\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.779615 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wfvd\" (UniqueName: \"kubernetes.io/projected/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-kube-api-access-2wfvd\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.779651 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-run-httpd\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.784533 5012 scope.go:117] "RemoveContainer" containerID="d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2" Feb 19 05:44:39 crc kubenswrapper[5012]: E0219 05:44:39.787455 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2\": container with ID starting with d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2 not found: ID does not exist" containerID="d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.787504 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2"} err="failed to get container status \"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2\": rpc error: code = NotFound desc = could not find container \"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2\": container with ID starting with d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.787532 5012 
scope.go:117] "RemoveContainer" containerID="d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0" Feb 19 05:44:39 crc kubenswrapper[5012]: E0219 05:44:39.788006 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0\": container with ID starting with d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0 not found: ID does not exist" containerID="d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.788044 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0"} err="failed to get container status \"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0\": rpc error: code = NotFound desc = could not find container \"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0\": container with ID starting with d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.788071 5012 scope.go:117] "RemoveContainer" containerID="6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f" Feb 19 05:44:39 crc kubenswrapper[5012]: E0219 05:44:39.788325 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f\": container with ID starting with 6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f not found: ID does not exist" containerID="6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.788350 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f"} err="failed to get container status \"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f\": rpc error: code = NotFound desc = could not find container \"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f\": container with ID starting with 6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.788364 5012 scope.go:117] "RemoveContainer" containerID="5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3" Feb 19 05:44:39 crc kubenswrapper[5012]: E0219 05:44:39.792959 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3\": container with ID starting with 5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3 not found: ID does not exist" containerID="5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.793472 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3"} err="failed to get container status \"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3\": rpc error: code = NotFound desc = could not find container \"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3\": container with ID starting with 5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.793490 5012 scope.go:117] "RemoveContainer" containerID="d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.798354 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-db-create-vj27c"] Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.812803 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2"} err="failed to get container status \"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2\": rpc error: code = NotFound desc = could not find container \"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2\": container with ID starting with d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.812846 5012 scope.go:117] "RemoveContainer" containerID="d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.813558 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0"} err="failed to get container status \"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0\": rpc error: code = NotFound desc = could not find container \"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0\": container with ID starting with d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.813587 5012 scope.go:117] "RemoveContainer" containerID="6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.814426 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f"} err="failed to get container status \"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f\": rpc error: code = NotFound desc = could not find container 
\"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f\": container with ID starting with 6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.814459 5012 scope.go:117] "RemoveContainer" containerID="5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.814881 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3"} err="failed to get container status \"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3\": rpc error: code = NotFound desc = could not find container \"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3\": container with ID starting with 5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.814894 5012 scope.go:117] "RemoveContainer" containerID="d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.815205 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2"} err="failed to get container status \"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2\": rpc error: code = NotFound desc = could not find container \"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2\": container with ID starting with d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.815222 5012 scope.go:117] "RemoveContainer" containerID="d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.819472 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0"} err="failed to get container status \"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0\": rpc error: code = NotFound desc = could not find container \"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0\": container with ID starting with d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.819507 5012 scope.go:117] "RemoveContainer" containerID="6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.826161 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f"} err="failed to get container status \"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f\": rpc error: code = NotFound desc = could not find container \"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f\": container with ID starting with 6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.826204 5012 scope.go:117] "RemoveContainer" containerID="5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.828743 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3"} err="failed to get container status \"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3\": rpc error: code = NotFound desc = could not find container \"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3\": container with ID starting with 
5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.828763 5012 scope.go:117] "RemoveContainer" containerID="d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.830030 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2"} err="failed to get container status \"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2\": rpc error: code = NotFound desc = could not find container \"d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2\": container with ID starting with d81e06ee10f0ec1a17dda1438f4d2658cbc90962dfd413857233d98f00aecaf2 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.830072 5012 scope.go:117] "RemoveContainer" containerID="d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.830507 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0"} err="failed to get container status \"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0\": rpc error: code = NotFound desc = could not find container \"d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0\": container with ID starting with d21f4ee1f30d556f07b58c78d25cffb6501e2d1c60e63d163776f5c844fa17d0 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.830553 5012 scope.go:117] "RemoveContainer" containerID="6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.831681 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f"} err="failed to get container status \"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f\": rpc error: code = NotFound desc = could not find container \"6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f\": container with ID starting with 6cf020db13be994db92654d0f838129cfc4ca7ee2357a806adc1367ebeea7f5f not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.831720 5012 scope.go:117] "RemoveContainer" containerID="5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.832421 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3"} err="failed to get container status \"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3\": rpc error: code = NotFound desc = could not find container \"5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3\": container with ID starting with 5c6b6c4a6370086fe18aba98f087569528ff0d6acb087d1f4c43ce3e52a192e3 not found: ID does not exist" Feb 19 05:44:39 crc kubenswrapper[5012]: W0219 05:44:39.836537 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod768cc9af_66f9_4972_a2b4_a69b0fb15b3d.slice/crio-f1474b2df1783edf7e36b069e0ed66b3d3ea5512e978fe4795cb650c4c998b7c WatchSource:0}: Error finding container f1474b2df1783edf7e36b069e0ed66b3d3ea5512e978fe4795cb650c4c998b7c: Status 404 returned error can't find the container with id f1474b2df1783edf7e36b069e0ed66b3d3ea5512e978fe4795cb650c4c998b7c Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.881785 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-log-httpd\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.882080 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-log-httpd\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.882167 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.882224 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-config-data\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.882274 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wfvd\" (UniqueName: \"kubernetes.io/projected/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-kube-api-access-2wfvd\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.882383 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-run-httpd\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 
05:44:39.882431 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.882541 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-scripts\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.884731 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-run-httpd\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.893413 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.894986 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-config-data\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.896373 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-scripts\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " 
pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.897532 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.906121 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wfvd\" (UniqueName: \"kubernetes.io/projected/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-kube-api-access-2wfvd\") pod \"ceilometer-0\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " pod="openstack/ceilometer-0" Feb 19 05:44:39 crc kubenswrapper[5012]: I0219 05:44:39.974421 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5ff88b6c7c-5bg66" Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.027240 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.040321 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b3d3-account-create-update-jv5jh"] Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.048580 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-77b847d784-sfqqm"] Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.048964 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-77b847d784-sfqqm" podUID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerName="neutron-api" containerID="cri-o://9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a" gracePeriod=30 Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.049422 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-77b847d784-sfqqm" podUID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerName="neutron-httpd" containerID="cri-o://6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f" gracePeriod=30 Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.189519 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" event={"ID":"0b1a4d80-a736-41c3-9157-c0a696c10eff","Type":"ContainerStarted","Data":"39e7d198d33d90bf562a2f8d83ddbd204e351ae8b0c5bc3ebf6ffd290d67ecd5"} Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.203155 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vj27c" event={"ID":"768cc9af-66f9-4972-a2b4-a69b0fb15b3d","Type":"ContainerStarted","Data":"f1474b2df1783edf7e36b069e0ed66b3d3ea5512e978fe4795cb650c4c998b7c"} Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.203952 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-fc68-account-create-update-tfrzr"] Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.213043 5012 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a4a6-account-create-update-tz4l9" event={"ID":"cd4d5a16-81ab-4336-99d5-570d83e4baaa","Type":"ContainerStarted","Data":"6bb09a0e711ff60e20706cda170848a83267740db4bd8dbf04824ff14d1736e8"} Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.215641 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j7vgh" event={"ID":"2fc398d7-f426-420d-981c-6bda415a2ce0","Type":"ContainerStarted","Data":"43cb426b1d824281e78b0291231050744f408cc09f73ab56e4ae893d291e9f7e"} Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.215679 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j7vgh" event={"ID":"2fc398d7-f426-420d-981c-6bda415a2ce0","Type":"ContainerStarted","Data":"b5f8ebe3d33920b34ec3478c99b9fa3b4b46afe5cb6d104ea5353e6955b7bf88"} Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.230969 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9gsgt"] Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.621290 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-j7vgh" podStartSLOduration=2.6212583240000003 podStartE2EDuration="2.621258324s" podCreationTimestamp="2026-02-19 05:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:44:40.232562434 +0000 UTC m=+1176.265885003" watchObservedRunningTime="2026-02-19 05:44:40.621258324 +0000 UTC m=+1176.654580893" Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.625406 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:40 crc kubenswrapper[5012]: W0219 05:44:40.640828 5012 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2e6ffe3_5533_459b_989b_e04f94b8f8ba.slice/crio-dd27232efe40574ec6d4be8487ee757105954e66ecc2b8f597ac24a25d2b5f76 WatchSource:0}: Error finding container dd27232efe40574ec6d4be8487ee757105954e66ecc2b8f597ac24a25d2b5f76: Status 404 returned error can't find the container with id dd27232efe40574ec6d4be8487ee757105954e66ecc2b8f597ac24a25d2b5f76 Feb 19 05:44:40 crc kubenswrapper[5012]: I0219 05:44:40.725412 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b39b6f2-c394-449b-9c41-1b09eabce119" path="/var/lib/kubelet/pods/1b39b6f2-c394-449b-9c41-1b09eabce119/volumes" Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.228412 5012 generic.go:334] "Generic (PLEG): container finished" podID="0b1a4d80-a736-41c3-9157-c0a696c10eff" containerID="db7dcb78edaee0fd0bebd3da354f9bac23c709f6cc5c4054736fd0aaea637cae" exitCode=0 Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.228797 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" event={"ID":"0b1a4d80-a736-41c3-9157-c0a696c10eff","Type":"ContainerDied","Data":"db7dcb78edaee0fd0bebd3da354f9bac23c709f6cc5c4054736fd0aaea637cae"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.232545 5012 generic.go:334] "Generic (PLEG): container finished" podID="768cc9af-66f9-4972-a2b4-a69b0fb15b3d" containerID="32216c19b01878e03cf37157a913c36ac04ed37d7c518d5811bfc0096e2fc84b" exitCode=0 Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.232599 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vj27c" event={"ID":"768cc9af-66f9-4972-a2b4-a69b0fb15b3d","Type":"ContainerDied","Data":"32216c19b01878e03cf37157a913c36ac04ed37d7c518d5811bfc0096e2fc84b"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.239065 5012 generic.go:334] "Generic (PLEG): container finished" podID="cd4d5a16-81ab-4336-99d5-570d83e4baaa" 
containerID="48110d1bed52f125950a67152ee45f991adbabd56a8a45d17e8316bb03423870" exitCode=0 Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.239138 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a4a6-account-create-update-tz4l9" event={"ID":"cd4d5a16-81ab-4336-99d5-570d83e4baaa","Type":"ContainerDied","Data":"48110d1bed52f125950a67152ee45f991adbabd56a8a45d17e8316bb03423870"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.244564 5012 generic.go:334] "Generic (PLEG): container finished" podID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerID="6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f" exitCode=0 Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.244658 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77b847d784-sfqqm" event={"ID":"20fc844f-415a-4c39-b2ac-966ff2a43a43","Type":"ContainerDied","Data":"6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.248433 5012 generic.go:334] "Generic (PLEG): container finished" podID="80e98ac0-3018-4566-95b3-2d2dfd3e234e" containerID="2adf806f0d4859a0678f70c1d3e40183b96910ec8d7d4b4dd3e550a8e559d848" exitCode=0 Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.248602 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" event={"ID":"80e98ac0-3018-4566-95b3-2d2dfd3e234e","Type":"ContainerDied","Data":"2adf806f0d4859a0678f70c1d3e40183b96910ec8d7d4b4dd3e550a8e559d848"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.248688 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" event={"ID":"80e98ac0-3018-4566-95b3-2d2dfd3e234e","Type":"ContainerStarted","Data":"fc9e0b36ba215a7b15a043e893474e804ffe85a7b980f7c6bfcf172eebfcba2c"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.250369 5012 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerStarted","Data":"34a399338c013b61152c60fcd0046303ede4ee51c443dfcf2a65805c9c44defe"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.250402 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerStarted","Data":"dd27232efe40574ec6d4be8487ee757105954e66ecc2b8f597ac24a25d2b5f76"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.256555 5012 generic.go:334] "Generic (PLEG): container finished" podID="2fc398d7-f426-420d-981c-6bda415a2ce0" containerID="43cb426b1d824281e78b0291231050744f408cc09f73ab56e4ae893d291e9f7e" exitCode=0 Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.256732 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j7vgh" event={"ID":"2fc398d7-f426-420d-981c-6bda415a2ce0","Type":"ContainerDied","Data":"43cb426b1d824281e78b0291231050744f408cc09f73ab56e4ae893d291e9f7e"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.278416 5012 generic.go:334] "Generic (PLEG): container finished" podID="efae98df-8f23-4e6b-bad0-f2c7a58fb86d" containerID="7b6605dba53e000181057a053dcadb95742b096a45a5fa3c7a87f8e866bb1bd9" exitCode=0 Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.278500 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9gsgt" event={"ID":"efae98df-8f23-4e6b-bad0-f2c7a58fb86d","Type":"ContainerDied","Data":"7b6605dba53e000181057a053dcadb95742b096a45a5fa3c7a87f8e866bb1bd9"} Feb 19 05:44:41 crc kubenswrapper[5012]: I0219 05:44:41.278532 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9gsgt" event={"ID":"efae98df-8f23-4e6b-bad0-f2c7a58fb86d","Type":"ContainerStarted","Data":"22014db8dc001ca072a53bc19b4abaaf826ac34b43c61bc952f4be5c2e88203d"} Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 
05:44:42.291379 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerStarted","Data":"cb200dd76cd661f7ff34b71bfb488f08698c2c8969d0994a64b2d1b69bb789ec"} Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.291794 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerStarted","Data":"4b17f7e35bacf75c95fd5af2ce831c9268ee336939f6e0582d263b98f40338b3"} Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.625912 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.759469 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5svrw\" (UniqueName: \"kubernetes.io/projected/80e98ac0-3018-4566-95b3-2d2dfd3e234e-kube-api-access-5svrw\") pod \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\" (UID: \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\") " Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.759572 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80e98ac0-3018-4566-95b3-2d2dfd3e234e-operator-scripts\") pod \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\" (UID: \"80e98ac0-3018-4566-95b3-2d2dfd3e234e\") " Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.765831 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80e98ac0-3018-4566-95b3-2d2dfd3e234e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "80e98ac0-3018-4566-95b3-2d2dfd3e234e" (UID: "80e98ac0-3018-4566-95b3-2d2dfd3e234e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.768415 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80e98ac0-3018-4566-95b3-2d2dfd3e234e-kube-api-access-5svrw" (OuterVolumeSpecName: "kube-api-access-5svrw") pod "80e98ac0-3018-4566-95b3-2d2dfd3e234e" (UID: "80e98ac0-3018-4566-95b3-2d2dfd3e234e"). InnerVolumeSpecName "kube-api-access-5svrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.844950 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.862096 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5svrw\" (UniqueName: \"kubernetes.io/projected/80e98ac0-3018-4566-95b3-2d2dfd3e234e-kube-api-access-5svrw\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.862125 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80e98ac0-3018-4566-95b3-2d2dfd3e234e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.964438 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwbw6\" (UniqueName: \"kubernetes.io/projected/0b1a4d80-a736-41c3-9157-c0a696c10eff-kube-api-access-bwbw6\") pod \"0b1a4d80-a736-41c3-9157-c0a696c10eff\" (UID: \"0b1a4d80-a736-41c3-9157-c0a696c10eff\") " Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.964885 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b1a4d80-a736-41c3-9157-c0a696c10eff-operator-scripts\") pod \"0b1a4d80-a736-41c3-9157-c0a696c10eff\" (UID: 
\"0b1a4d80-a736-41c3-9157-c0a696c10eff\") " Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.965961 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b1a4d80-a736-41c3-9157-c0a696c10eff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b1a4d80-a736-41c3-9157-c0a696c10eff" (UID: "0b1a4d80-a736-41c3-9157-c0a696c10eff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:42 crc kubenswrapper[5012]: I0219 05:44:42.975211 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b1a4d80-a736-41c3-9157-c0a696c10eff-kube-api-access-bwbw6" (OuterVolumeSpecName: "kube-api-access-bwbw6") pod "0b1a4d80-a736-41c3-9157-c0a696c10eff" (UID: "0b1a4d80-a736-41c3-9157-c0a696c10eff"). InnerVolumeSpecName "kube-api-access-bwbw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.048598 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-j7vgh" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.067644 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwbw6\" (UniqueName: \"kubernetes.io/projected/0b1a4d80-a736-41c3-9157-c0a696c10eff-kube-api-access-bwbw6\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.067675 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b1a4d80-a736-41c3-9157-c0a696c10eff-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.111005 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9gsgt" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.115072 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a4a6-account-create-update-tz4l9" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.160352 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vj27c" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.170899 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.171464 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fc398d7-f426-420d-981c-6bda415a2ce0-operator-scripts\") pod \"2fc398d7-f426-420d-981c-6bda415a2ce0\" (UID: \"2fc398d7-f426-420d-981c-6bda415a2ce0\") " Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.171579 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-operator-scripts\") pod \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\" (UID: \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\") " Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.171660 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hs9f\" (UniqueName: \"kubernetes.io/projected/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-kube-api-access-5hs9f\") pod \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\" (UID: \"efae98df-8f23-4e6b-bad0-f2c7a58fb86d\") " Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.171688 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsvz7\" (UniqueName: \"kubernetes.io/projected/2fc398d7-f426-420d-981c-6bda415a2ce0-kube-api-access-xsvz7\") pod 
\"2fc398d7-f426-420d-981c-6bda415a2ce0\" (UID: \"2fc398d7-f426-420d-981c-6bda415a2ce0\") " Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.171746 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd4d5a16-81ab-4336-99d5-570d83e4baaa-operator-scripts\") pod \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\" (UID: \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\") " Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.171770 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wchr2\" (UniqueName: \"kubernetes.io/projected/cd4d5a16-81ab-4336-99d5-570d83e4baaa-kube-api-access-wchr2\") pod \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\" (UID: \"cd4d5a16-81ab-4336-99d5-570d83e4baaa\") " Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.175806 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fc398d7-f426-420d-981c-6bda415a2ce0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2fc398d7-f426-420d-981c-6bda415a2ce0" (UID: "2fc398d7-f426-420d-981c-6bda415a2ce0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.176291 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "efae98df-8f23-4e6b-bad0-f2c7a58fb86d" (UID: "efae98df-8f23-4e6b-bad0-f2c7a58fb86d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.180070 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd4d5a16-81ab-4336-99d5-570d83e4baaa-kube-api-access-wchr2" (OuterVolumeSpecName: "kube-api-access-wchr2") pod "cd4d5a16-81ab-4336-99d5-570d83e4baaa" (UID: "cd4d5a16-81ab-4336-99d5-570d83e4baaa"). InnerVolumeSpecName "kube-api-access-wchr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.185545 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-kube-api-access-5hs9f" (OuterVolumeSpecName: "kube-api-access-5hs9f") pod "efae98df-8f23-4e6b-bad0-f2c7a58fb86d" (UID: "efae98df-8f23-4e6b-bad0-f2c7a58fb86d"). InnerVolumeSpecName "kube-api-access-5hs9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.188680 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fc398d7-f426-420d-981c-6bda415a2ce0-kube-api-access-xsvz7" (OuterVolumeSpecName: "kube-api-access-xsvz7") pod "2fc398d7-f426-420d-981c-6bda415a2ce0" (UID: "2fc398d7-f426-420d-981c-6bda415a2ce0"). InnerVolumeSpecName "kube-api-access-xsvz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.192133 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd4d5a16-81ab-4336-99d5-570d83e4baaa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cd4d5a16-81ab-4336-99d5-570d83e4baaa" (UID: "cd4d5a16-81ab-4336-99d5-570d83e4baaa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.273202 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hn7v\" (UniqueName: \"kubernetes.io/projected/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-kube-api-access-6hn7v\") pod \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\" (UID: \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\") " Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.273383 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-operator-scripts\") pod \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\" (UID: \"768cc9af-66f9-4972-a2b4-a69b0fb15b3d\") " Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.273864 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wchr2\" (UniqueName: \"kubernetes.io/projected/cd4d5a16-81ab-4336-99d5-570d83e4baaa-kube-api-access-wchr2\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.273898 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fc398d7-f426-420d-981c-6bda415a2ce0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.273907 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.273917 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hs9f\" (UniqueName: \"kubernetes.io/projected/efae98df-8f23-4e6b-bad0-f2c7a58fb86d-kube-api-access-5hs9f\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.273925 5012 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-xsvz7\" (UniqueName: \"kubernetes.io/projected/2fc398d7-f426-420d-981c-6bda415a2ce0-kube-api-access-xsvz7\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.273933 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd4d5a16-81ab-4336-99d5-570d83e4baaa-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.274295 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "768cc9af-66f9-4972-a2b4-a69b0fb15b3d" (UID: "768cc9af-66f9-4972-a2b4-a69b0fb15b3d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.276809 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-kube-api-access-6hn7v" (OuterVolumeSpecName: "kube-api-access-6hn7v") pod "768cc9af-66f9-4972-a2b4-a69b0fb15b3d" (UID: "768cc9af-66f9-4972-a2b4-a69b0fb15b3d"). InnerVolumeSpecName "kube-api-access-6hn7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.304345 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a4a6-account-create-update-tz4l9" event={"ID":"cd4d5a16-81ab-4336-99d5-570d83e4baaa","Type":"ContainerDied","Data":"6bb09a0e711ff60e20706cda170848a83267740db4bd8dbf04824ff14d1736e8"} Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.304398 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bb09a0e711ff60e20706cda170848a83267740db4bd8dbf04824ff14d1736e8" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.304455 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a4a6-account-create-update-tz4l9" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.306914 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" event={"ID":"80e98ac0-3018-4566-95b3-2d2dfd3e234e","Type":"ContainerDied","Data":"fc9e0b36ba215a7b15a043e893474e804ffe85a7b980f7c6bfcf172eebfcba2c"} Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.306960 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc9e0b36ba215a7b15a043e893474e804ffe85a7b980f7c6bfcf172eebfcba2c" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.307028 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-fc68-account-create-update-tfrzr" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.321323 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-j7vgh" event={"ID":"2fc398d7-f426-420d-981c-6bda415a2ce0","Type":"ContainerDied","Data":"b5f8ebe3d33920b34ec3478c99b9fa3b4b46afe5cb6d104ea5353e6955b7bf88"} Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.321364 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5f8ebe3d33920b34ec3478c99b9fa3b4b46afe5cb6d104ea5353e6955b7bf88" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.321482 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-j7vgh" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.335768 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9gsgt" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.335798 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9gsgt" event={"ID":"efae98df-8f23-4e6b-bad0-f2c7a58fb86d","Type":"ContainerDied","Data":"22014db8dc001ca072a53bc19b4abaaf826ac34b43c61bc952f4be5c2e88203d"} Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.335834 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22014db8dc001ca072a53bc19b4abaaf826ac34b43c61bc952f4be5c2e88203d" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.341479 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" event={"ID":"0b1a4d80-a736-41c3-9157-c0a696c10eff","Type":"ContainerDied","Data":"39e7d198d33d90bf562a2f8d83ddbd204e351ae8b0c5bc3ebf6ffd290d67ecd5"} Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.341523 5012 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="39e7d198d33d90bf562a2f8d83ddbd204e351ae8b0c5bc3ebf6ffd290d67ecd5" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.341613 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b3d3-account-create-update-jv5jh" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.342769 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.342802 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.360073 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vj27c" event={"ID":"768cc9af-66f9-4972-a2b4-a69b0fb15b3d","Type":"ContainerDied","Data":"f1474b2df1783edf7e36b069e0ed66b3d3ea5512e978fe4795cb650c4c998b7c"} Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.360106 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1474b2df1783edf7e36b069e0ed66b3d3ea5512e978fe4795cb650c4c998b7c" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.360152 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-vj27c" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.375772 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hn7v\" (UniqueName: \"kubernetes.io/projected/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-kube-api-access-6hn7v\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.375808 5012 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768cc9af-66f9-4972-a2b4-a69b0fb15b3d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.405536 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 05:44:43 crc kubenswrapper[5012]: I0219 05:44:43.419438 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.051421 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.093742 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-combined-ca-bundle\") pod \"20fc844f-415a-4c39-b2ac-966ff2a43a43\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.093855 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-config\") pod \"20fc844f-415a-4c39-b2ac-966ff2a43a43\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.093941 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-ovndb-tls-certs\") pod \"20fc844f-415a-4c39-b2ac-966ff2a43a43\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.094085 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-httpd-config\") pod \"20fc844f-415a-4c39-b2ac-966ff2a43a43\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.094166 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm6t4\" (UniqueName: \"kubernetes.io/projected/20fc844f-415a-4c39-b2ac-966ff2a43a43-kube-api-access-cm6t4\") pod \"20fc844f-415a-4c39-b2ac-966ff2a43a43\" (UID: \"20fc844f-415a-4c39-b2ac-966ff2a43a43\") " Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.100495 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/20fc844f-415a-4c39-b2ac-966ff2a43a43-kube-api-access-cm6t4" (OuterVolumeSpecName: "kube-api-access-cm6t4") pod "20fc844f-415a-4c39-b2ac-966ff2a43a43" (UID: "20fc844f-415a-4c39-b2ac-966ff2a43a43"). InnerVolumeSpecName "kube-api-access-cm6t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.101435 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "20fc844f-415a-4c39-b2ac-966ff2a43a43" (UID: "20fc844f-415a-4c39-b2ac-966ff2a43a43"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.195807 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "20fc844f-415a-4c39-b2ac-966ff2a43a43" (UID: "20fc844f-415a-4c39-b2ac-966ff2a43a43"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.198424 5012 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.198459 5012 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.198469 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm6t4\" (UniqueName: \"kubernetes.io/projected/20fc844f-415a-4c39-b2ac-966ff2a43a43-kube-api-access-cm6t4\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.201410 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20fc844f-415a-4c39-b2ac-966ff2a43a43" (UID: "20fc844f-415a-4c39-b2ac-966ff2a43a43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.214733 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-config" (OuterVolumeSpecName: "config") pod "20fc844f-415a-4c39-b2ac-966ff2a43a43" (UID: "20fc844f-415a-4c39-b2ac-966ff2a43a43"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.299416 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.299445 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/20fc844f-415a-4c39-b2ac-966ff2a43a43-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.371158 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerStarted","Data":"694cb7239194668fdd96877662e230d283d111646e3e233d72ff54fa322e04ce"} Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.371278 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="ceilometer-central-agent" containerID="cri-o://34a399338c013b61152c60fcd0046303ede4ee51c443dfcf2a65805c9c44defe" gracePeriod=30 Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.371339 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.371395 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="proxy-httpd" containerID="cri-o://694cb7239194668fdd96877662e230d283d111646e3e233d72ff54fa322e04ce" gracePeriod=30 Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.371431 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="sg-core" 
containerID="cri-o://cb200dd76cd661f7ff34b71bfb488f08698c2c8969d0994a64b2d1b69bb789ec" gracePeriod=30 Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.371481 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="ceilometer-notification-agent" containerID="cri-o://4b17f7e35bacf75c95fd5af2ce831c9268ee336939f6e0582d263b98f40338b3" gracePeriod=30 Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.374809 5012 generic.go:334] "Generic (PLEG): container finished" podID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerID="9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a" exitCode=0 Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.375604 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77b847d784-sfqqm" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.376404 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77b847d784-sfqqm" event={"ID":"20fc844f-415a-4c39-b2ac-966ff2a43a43","Type":"ContainerDied","Data":"9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a"} Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.376511 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77b847d784-sfqqm" event={"ID":"20fc844f-415a-4c39-b2ac-966ff2a43a43","Type":"ContainerDied","Data":"05d6404e6cfe0f5924141acac1a5c449939eddf44dc7eb77958158988b1bb5ee"} Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.376614 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.376707 5012 scope.go:117] "RemoveContainer" containerID="6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.376978 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/glance-default-external-api-0" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.386026 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.386703 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.396218 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.565078825 podStartE2EDuration="5.396196148s" podCreationTimestamp="2026-02-19 05:44:39 +0000 UTC" firstStartedPulling="2026-02-19 05:44:40.643203949 +0000 UTC m=+1176.676526518" lastFinishedPulling="2026-02-19 05:44:43.474321272 +0000 UTC m=+1179.507643841" observedRunningTime="2026-02-19 05:44:44.391801341 +0000 UTC m=+1180.425123900" watchObservedRunningTime="2026-02-19 05:44:44.396196148 +0000 UTC m=+1180.429518717" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.417394 5012 scope.go:117] "RemoveContainer" containerID="9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.422548 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-77b847d784-sfqqm"] Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.430877 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-77b847d784-sfqqm"] Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.436981 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.450385 5012 scope.go:117] "RemoveContainer" containerID="6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f" Feb 19 05:44:44 crc kubenswrapper[5012]: E0219 05:44:44.454371 5012 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f\": container with ID starting with 6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f not found: ID does not exist" containerID="6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.454403 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f"} err="failed to get container status \"6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f\": rpc error: code = NotFound desc = could not find container \"6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f\": container with ID starting with 6ef0e95965d7a44b19e276aab29d03a7363b42193318fc36c3ca62b6aabb695f not found: ID does not exist" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.454420 5012 scope.go:117] "RemoveContainer" containerID="9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a" Feb 19 05:44:44 crc kubenswrapper[5012]: E0219 05:44:44.457361 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a\": container with ID starting with 9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a not found: ID does not exist" containerID="9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.457387 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a"} err="failed to get container status \"9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a\": rpc error: code = NotFound desc = could not find container 
\"9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a\": container with ID starting with 9b13242d6a7d2ee338575299e982e0eae0ed17b24e3f44231487a39fbe192f6a not found: ID does not exist" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.459856 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:44 crc kubenswrapper[5012]: I0219 05:44:44.712560 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20fc844f-415a-4c39-b2ac-966ff2a43a43" path="/var/lib/kubelet/pods/20fc844f-415a-4c39-b2ac-966ff2a43a43/volumes" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.395818 5012 generic.go:334] "Generic (PLEG): container finished" podID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerID="694cb7239194668fdd96877662e230d283d111646e3e233d72ff54fa322e04ce" exitCode=0 Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.396127 5012 generic.go:334] "Generic (PLEG): container finished" podID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerID="cb200dd76cd661f7ff34b71bfb488f08698c2c8969d0994a64b2d1b69bb789ec" exitCode=2 Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.396136 5012 generic.go:334] "Generic (PLEG): container finished" podID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerID="4b17f7e35bacf75c95fd5af2ce831c9268ee336939f6e0582d263b98f40338b3" exitCode=0 Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.396142 5012 generic.go:334] "Generic (PLEG): container finished" podID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerID="34a399338c013b61152c60fcd0046303ede4ee51c443dfcf2a65805c9c44defe" exitCode=0 Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.396227 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerDied","Data":"694cb7239194668fdd96877662e230d283d111646e3e233d72ff54fa322e04ce"} Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 
05:44:45.396283 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerDied","Data":"cb200dd76cd661f7ff34b71bfb488f08698c2c8969d0994a64b2d1b69bb789ec"} Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.396294 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerDied","Data":"4b17f7e35bacf75c95fd5af2ce831c9268ee336939f6e0582d263b98f40338b3"} Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.396354 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerDied","Data":"34a399338c013b61152c60fcd0046303ede4ee51c443dfcf2a65805c9c44defe"} Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.396363 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2e6ffe3-5533-459b-989b-e04f94b8f8ba","Type":"ContainerDied","Data":"dd27232efe40574ec6d4be8487ee757105954e66ecc2b8f597ac24a25d2b5f76"} Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.396373 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd27232efe40574ec6d4be8487ee757105954e66ecc2b8f597ac24a25d2b5f76" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.407545 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.409413 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.419336 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.537082 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wfvd\" (UniqueName: \"kubernetes.io/projected/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-kube-api-access-2wfvd\") pod \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.538088 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-sg-core-conf-yaml\") pod \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.538118 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-config-data\") pod \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.538144 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-run-httpd\") pod \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.538166 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-log-httpd\") pod \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.538210 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-combined-ca-bundle\") pod \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.538262 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-scripts\") pod \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\" (UID: \"f2e6ffe3-5533-459b-989b-e04f94b8f8ba\") " Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.538401 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f2e6ffe3-5533-459b-989b-e04f94b8f8ba" (UID: "f2e6ffe3-5533-459b-989b-e04f94b8f8ba"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.538682 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f2e6ffe3-5533-459b-989b-e04f94b8f8ba" (UID: "f2e6ffe3-5533-459b-989b-e04f94b8f8ba"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.539219 5012 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.539258 5012 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.545337 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-kube-api-access-2wfvd" (OuterVolumeSpecName: "kube-api-access-2wfvd") pod "f2e6ffe3-5533-459b-989b-e04f94b8f8ba" (UID: "f2e6ffe3-5533-459b-989b-e04f94b8f8ba"). InnerVolumeSpecName "kube-api-access-2wfvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.548825 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-scripts" (OuterVolumeSpecName: "scripts") pod "f2e6ffe3-5533-459b-989b-e04f94b8f8ba" (UID: "f2e6ffe3-5533-459b-989b-e04f94b8f8ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.584544 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f2e6ffe3-5533-459b-989b-e04f94b8f8ba" (UID: "f2e6ffe3-5533-459b-989b-e04f94b8f8ba"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.640645 5012 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.640812 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.640904 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wfvd\" (UniqueName: \"kubernetes.io/projected/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-kube-api-access-2wfvd\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.667522 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2e6ffe3-5533-459b-989b-e04f94b8f8ba" (UID: "f2e6ffe3-5533-459b-989b-e04f94b8f8ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.676691 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-config-data" (OuterVolumeSpecName: "config-data") pod "f2e6ffe3-5533-459b-989b-e04f94b8f8ba" (UID: "f2e6ffe3-5533-459b-989b-e04f94b8f8ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.743354 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:45 crc kubenswrapper[5012]: I0219 05:44:45.743540 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e6ffe3-5533-459b-989b-e04f94b8f8ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.385876 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.390906 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.462103 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.507133 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.525624 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.531942 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532400 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80e98ac0-3018-4566-95b3-2d2dfd3e234e" containerName="mariadb-account-create-update" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532614 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="80e98ac0-3018-4566-95b3-2d2dfd3e234e" containerName="mariadb-account-create-update" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532627 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b1a4d80-a736-41c3-9157-c0a696c10eff" containerName="mariadb-account-create-update" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532634 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b1a4d80-a736-41c3-9157-c0a696c10eff" containerName="mariadb-account-create-update" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532651 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="sg-core" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532657 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="sg-core" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532673 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerName="neutron-httpd" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532679 5012 
state_mem.go:107] "Deleted CPUSet assignment" podUID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerName="neutron-httpd" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532687 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="768cc9af-66f9-4972-a2b4-a69b0fb15b3d" containerName="mariadb-database-create" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532692 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="768cc9af-66f9-4972-a2b4-a69b0fb15b3d" containerName="mariadb-database-create" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532711 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd4d5a16-81ab-4336-99d5-570d83e4baaa" containerName="mariadb-account-create-update" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532716 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd4d5a16-81ab-4336-99d5-570d83e4baaa" containerName="mariadb-account-create-update" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532726 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc398d7-f426-420d-981c-6bda415a2ce0" containerName="mariadb-database-create" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532732 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc398d7-f426-420d-981c-6bda415a2ce0" containerName="mariadb-database-create" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532744 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efae98df-8f23-4e6b-bad0-f2c7a58fb86d" containerName="mariadb-database-create" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532749 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="efae98df-8f23-4e6b-bad0-f2c7a58fb86d" containerName="mariadb-database-create" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532758 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" 
containerName="ceilometer-notification-agent" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532764 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="ceilometer-notification-agent" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532775 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="proxy-httpd" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532781 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="proxy-httpd" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532792 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="ceilometer-central-agent" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532800 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="ceilometer-central-agent" Feb 19 05:44:46 crc kubenswrapper[5012]: E0219 05:44:46.532811 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerName="neutron-api" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532816 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerName="neutron-api" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.532985 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="sg-core" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533000 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc398d7-f426-420d-981c-6bda415a2ce0" containerName="mariadb-database-create" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533011 5012 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="ceilometer-notification-agent" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533023 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerName="neutron-api" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533033 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="80e98ac0-3018-4566-95b3-2d2dfd3e234e" containerName="mariadb-account-create-update" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533042 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd4d5a16-81ab-4336-99d5-570d83e4baaa" containerName="mariadb-account-create-update" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533050 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="20fc844f-415a-4c39-b2ac-966ff2a43a43" containerName="neutron-httpd" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533062 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="efae98df-8f23-4e6b-bad0-f2c7a58fb86d" containerName="mariadb-database-create" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533070 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b1a4d80-a736-41c3-9157-c0a696c10eff" containerName="mariadb-account-create-update" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533077 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="proxy-httpd" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533087 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" containerName="ceilometer-central-agent" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.533096 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="768cc9af-66f9-4972-a2b4-a69b0fb15b3d" containerName="mariadb-database-create" Feb 19 05:44:46 crc 
kubenswrapper[5012]: I0219 05:44:46.534822 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.537750 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.538026 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.543791 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.660457 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.660729 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-586kp\" (UniqueName: \"kubernetes.io/projected/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-kube-api-access-586kp\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.660781 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-run-httpd\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.660808 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-config-data\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.660826 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-log-httpd\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.660858 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.660898 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-scripts\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.704594 5012 scope.go:117] "RemoveContainer" containerID="3fe096d4e76671ad6ed28d2c1acfd3c50b1ec4a14f0f8ab2ef4419008e64c651" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.720647 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2e6ffe3-5533-459b-989b-e04f94b8f8ba" path="/var/lib/kubelet/pods/f2e6ffe3-5533-459b-989b-e04f94b8f8ba/volumes" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.763062 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-config-data\") pod \"ceilometer-0\" (UID: 
\"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.763106 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-log-httpd\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.763147 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.763194 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-scripts\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.763261 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.763280 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-586kp\" (UniqueName: \"kubernetes.io/projected/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-kube-api-access-586kp\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.763334 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-run-httpd\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.763790 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-run-httpd\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.763852 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-log-httpd\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.771209 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.772171 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-scripts\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.783055 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-config-data\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.788420 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.792325 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-586kp\" (UniqueName: \"kubernetes.io/projected/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-kube-api-access-586kp\") pod \"ceilometer-0\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " pod="openstack/ceilometer-0" Feb 19 05:44:46 crc kubenswrapper[5012]: I0219 05:44:46.863920 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:44:47 crc kubenswrapper[5012]: I0219 05:44:47.335501 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:47 crc kubenswrapper[5012]: I0219 05:44:47.336458 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 05:44:47 crc kubenswrapper[5012]: I0219 05:44:47.500687 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerStarted","Data":"8189678acbed8117b25379f69dcbf461a7f4e3e9e52f112862c9f9884dc160dd"} Feb 19 05:44:47 crc kubenswrapper[5012]: I0219 05:44:47.502588 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerStarted","Data":"2a0d41cf4d088f495c93e797c822d094e5bc72e3b75f843179ab4798684437d6"} Feb 19 05:44:47 crc kubenswrapper[5012]: I0219 05:44:47.514793 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:47 crc kubenswrapper[5012]: I0219 05:44:47.514902 5012 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 05:44:47 crc kubenswrapper[5012]: I0219 05:44:47.605766 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 05:44:48 crc kubenswrapper[5012]: I0219 05:44:48.522635 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerStarted","Data":"2208308f841fe7a9243f88d9f9187f00d17fea4911bec7a9bd65972f32feba5b"} Feb 19 05:44:48 crc kubenswrapper[5012]: I0219 05:44:48.523160 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerStarted","Data":"da75d7466a0ff406fa2154591bf96210b474445eaf7b118b66950fc5a8bf1d53"} Feb 19 05:44:48 crc kubenswrapper[5012]: I0219 05:44:48.907592 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vz94t"] Feb 19 05:44:48 crc kubenswrapper[5012]: I0219 05:44:48.908782 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:48 crc kubenswrapper[5012]: I0219 05:44:48.910824 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-bjq99" Feb 19 05:44:48 crc kubenswrapper[5012]: I0219 05:44:48.910981 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 05:44:48 crc kubenswrapper[5012]: I0219 05:44:48.911103 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 19 05:44:48 crc kubenswrapper[5012]: I0219 05:44:48.941580 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vz94t"] Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.004667 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-scripts\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.004721 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8kw9\" (UniqueName: \"kubernetes.io/projected/3f256783-305c-4782-81c0-5aed8867b7e3-kube-api-access-j8kw9\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.004837 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " 
pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.004881 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-config-data\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.106594 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-config-data\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.106935 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-scripts\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.106972 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8kw9\" (UniqueName: \"kubernetes.io/projected/3f256783-305c-4782-81c0-5aed8867b7e3-kube-api-access-j8kw9\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.107072 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: 
\"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.110558 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.111049 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-scripts\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.113405 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-config-data\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.127752 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8kw9\" (UniqueName: \"kubernetes.io/projected/3f256783-305c-4782-81c0-5aed8867b7e3-kube-api-access-j8kw9\") pod \"nova-cell0-conductor-db-sync-vz94t\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.237192 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.549817 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerStarted","Data":"7dc207a7d7ed60a60ea64cf89b3f19074f008a063f98dae3bad25f208ef9a023"} Feb 19 05:44:49 crc kubenswrapper[5012]: W0219 05:44:49.871432 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f256783_305c_4782_81c0_5aed8867b7e3.slice/crio-febd118a98b60967b90a09d8286afcce4d62eeec224ba263a32dcf0170bba3da WatchSource:0}: Error finding container febd118a98b60967b90a09d8286afcce4d62eeec224ba263a32dcf0170bba3da: Status 404 returned error can't find the container with id febd118a98b60967b90a09d8286afcce4d62eeec224ba263a32dcf0170bba3da Feb 19 05:44:49 crc kubenswrapper[5012]: I0219 05:44:49.875041 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vz94t"] Feb 19 05:44:50 crc kubenswrapper[5012]: I0219 05:44:50.565546 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerStarted","Data":"5f9e5835bb21eb902f863e74d75272487945828c4edd9c9a4570e94352117eb1"} Feb 19 05:44:50 crc kubenswrapper[5012]: I0219 05:44:50.567501 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 05:44:50 crc kubenswrapper[5012]: I0219 05:44:50.571170 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vz94t" event={"ID":"3f256783-305c-4782-81c0-5aed8867b7e3","Type":"ContainerStarted","Data":"febd118a98b60967b90a09d8286afcce4d62eeec224ba263a32dcf0170bba3da"} Feb 19 05:44:50 crc kubenswrapper[5012]: I0219 05:44:50.599200 5012 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ceilometer-0" podStartSLOduration=2.029786951 podStartE2EDuration="4.599182565s" podCreationTimestamp="2026-02-19 05:44:46 +0000 UTC" firstStartedPulling="2026-02-19 05:44:47.336253312 +0000 UTC m=+1183.369575881" lastFinishedPulling="2026-02-19 05:44:49.905648926 +0000 UTC m=+1185.938971495" observedRunningTime="2026-02-19 05:44:50.590800211 +0000 UTC m=+1186.624122790" watchObservedRunningTime="2026-02-19 05:44:50.599182565 +0000 UTC m=+1186.632505144" Feb 19 05:44:51 crc kubenswrapper[5012]: I0219 05:44:51.808787 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:51 crc kubenswrapper[5012]: I0219 05:44:51.809201 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:51 crc kubenswrapper[5012]: I0219 05:44:51.860244 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:52 crc kubenswrapper[5012]: I0219 05:44:52.621282 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:52 crc kubenswrapper[5012]: I0219 05:44:52.663788 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 05:44:54 crc kubenswrapper[5012]: I0219 05:44:54.603414 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" containerID="cri-o://2a0d41cf4d088f495c93e797c822d094e5bc72e3b75f843179ab4798684437d6" gracePeriod=30 Feb 19 05:44:55 crc kubenswrapper[5012]: I0219 05:44:55.678629 5012 generic.go:334] "Generic (PLEG): container finished" podID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerID="2a0d41cf4d088f495c93e797c822d094e5bc72e3b75f843179ab4798684437d6" exitCode=0 Feb 19 05:44:55 
crc kubenswrapper[5012]: I0219 05:44:55.678711 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerDied","Data":"2a0d41cf4d088f495c93e797c822d094e5bc72e3b75f843179ab4798684437d6"} Feb 19 05:44:55 crc kubenswrapper[5012]: I0219 05:44:55.678925 5012 scope.go:117] "RemoveContainer" containerID="3fe096d4e76671ad6ed28d2c1acfd3c50b1ec4a14f0f8ab2ef4419008e64c651" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.253774 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.389253 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fdaa495-6cde-409a-871a-e334ca3f2a91-logs\") pod \"7fdaa495-6cde-409a-871a-e334ca3f2a91\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.389376 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpx9g\" (UniqueName: \"kubernetes.io/projected/7fdaa495-6cde-409a-871a-e334ca3f2a91-kube-api-access-cpx9g\") pod \"7fdaa495-6cde-409a-871a-e334ca3f2a91\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.389482 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-combined-ca-bundle\") pod \"7fdaa495-6cde-409a-871a-e334ca3f2a91\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.389543 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-config-data\") pod 
\"7fdaa495-6cde-409a-871a-e334ca3f2a91\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.389615 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-custom-prometheus-ca\") pod \"7fdaa495-6cde-409a-871a-e334ca3f2a91\" (UID: \"7fdaa495-6cde-409a-871a-e334ca3f2a91\") " Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.389939 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fdaa495-6cde-409a-871a-e334ca3f2a91-logs" (OuterVolumeSpecName: "logs") pod "7fdaa495-6cde-409a-871a-e334ca3f2a91" (UID: "7fdaa495-6cde-409a-871a-e334ca3f2a91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.424761 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fdaa495-6cde-409a-871a-e334ca3f2a91-kube-api-access-cpx9g" (OuterVolumeSpecName: "kube-api-access-cpx9g") pod "7fdaa495-6cde-409a-871a-e334ca3f2a91" (UID: "7fdaa495-6cde-409a-871a-e334ca3f2a91"). InnerVolumeSpecName "kube-api-access-cpx9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.431549 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fdaa495-6cde-409a-871a-e334ca3f2a91" (UID: "7fdaa495-6cde-409a-871a-e334ca3f2a91"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.434426 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "7fdaa495-6cde-409a-871a-e334ca3f2a91" (UID: "7fdaa495-6cde-409a-871a-e334ca3f2a91"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.461587 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-config-data" (OuterVolumeSpecName: "config-data") pod "7fdaa495-6cde-409a-871a-e334ca3f2a91" (UID: "7fdaa495-6cde-409a-871a-e334ca3f2a91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.492127 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.492341 5012 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.492423 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fdaa495-6cde-409a-871a-e334ca3f2a91-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.492498 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpx9g\" (UniqueName: \"kubernetes.io/projected/7fdaa495-6cde-409a-871a-e334ca3f2a91-kube-api-access-cpx9g\") on node \"crc\" DevicePath \"\"" 
Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.492565 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fdaa495-6cde-409a-871a-e334ca3f2a91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.691812 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"7fdaa495-6cde-409a-871a-e334ca3f2a91","Type":"ContainerDied","Data":"27cdd4f4a5ee55d08e9db9c6e3380ff5674b5137557956c3e1a7be05a457c3b6"} Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.692527 5012 scope.go:117] "RemoveContainer" containerID="2a0d41cf4d088f495c93e797c822d094e5bc72e3b75f843179ab4798684437d6" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.692765 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.738712 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.759104 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.775337 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 05:44:56 crc kubenswrapper[5012]: E0219 05:44:56.776081 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.776111 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:44:56 crc kubenswrapper[5012]: E0219 05:44:56.776128 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.776139 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:44:56 crc kubenswrapper[5012]: E0219 05:44:56.776156 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.776164 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.776463 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.776485 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.777477 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.785133 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.790025 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.903295 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f87036fc-fa94-4038-8b65-bb85d8ff6f10-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.903615 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f87036fc-fa94-4038-8b65-bb85d8ff6f10-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.903899 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f87036fc-fa94-4038-8b65-bb85d8ff6f10-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.904187 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f87036fc-fa94-4038-8b65-bb85d8ff6f10-logs\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " 
pod="openstack/watcher-decision-engine-0" Feb 19 05:44:56 crc kubenswrapper[5012]: I0219 05:44:56.904407 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn6g8\" (UniqueName: \"kubernetes.io/projected/f87036fc-fa94-4038-8b65-bb85d8ff6f10-kube-api-access-xn6g8\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.017346 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f87036fc-fa94-4038-8b65-bb85d8ff6f10-logs\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.017507 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn6g8\" (UniqueName: \"kubernetes.io/projected/f87036fc-fa94-4038-8b65-bb85d8ff6f10-kube-api-access-xn6g8\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.017662 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f87036fc-fa94-4038-8b65-bb85d8ff6f10-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.017731 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f87036fc-fa94-4038-8b65-bb85d8ff6f10-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 
crc kubenswrapper[5012]: I0219 05:44:57.017808 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f87036fc-fa94-4038-8b65-bb85d8ff6f10-logs\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.017859 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f87036fc-fa94-4038-8b65-bb85d8ff6f10-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.026231 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f87036fc-fa94-4038-8b65-bb85d8ff6f10-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.031053 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f87036fc-fa94-4038-8b65-bb85d8ff6f10-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.037125 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f87036fc-fa94-4038-8b65-bb85d8ff6f10-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.043893 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xn6g8\" (UniqueName: \"kubernetes.io/projected/f87036fc-fa94-4038-8b65-bb85d8ff6f10-kube-api-access-xn6g8\") pod \"watcher-decision-engine-0\" (UID: \"f87036fc-fa94-4038-8b65-bb85d8ff6f10\") " pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.123343 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.147692 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.150617 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="ceilometer-central-agent" containerID="cri-o://da75d7466a0ff406fa2154591bf96210b474445eaf7b118b66950fc5a8bf1d53" gracePeriod=30 Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.150709 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="proxy-httpd" containerID="cri-o://5f9e5835bb21eb902f863e74d75272487945828c4edd9c9a4570e94352117eb1" gracePeriod=30 Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.150771 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="ceilometer-notification-agent" containerID="cri-o://2208308f841fe7a9243f88d9f9187f00d17fea4911bec7a9bd65972f32feba5b" gracePeriod=30 Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.150726 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="sg-core" containerID="cri-o://7dc207a7d7ed60a60ea64cf89b3f19074f008a063f98dae3bad25f208ef9a023" gracePeriod=30 Feb 19 05:44:57 
crc kubenswrapper[5012]: I0219 05:44:57.684822 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.705231 5012 generic.go:334] "Generic (PLEG): container finished" podID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerID="5f9e5835bb21eb902f863e74d75272487945828c4edd9c9a4570e94352117eb1" exitCode=0 Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.705276 5012 generic.go:334] "Generic (PLEG): container finished" podID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerID="7dc207a7d7ed60a60ea64cf89b3f19074f008a063f98dae3bad25f208ef9a023" exitCode=2 Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.705332 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerDied","Data":"5f9e5835bb21eb902f863e74d75272487945828c4edd9c9a4570e94352117eb1"} Feb 19 05:44:57 crc kubenswrapper[5012]: I0219 05:44:57.705422 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerDied","Data":"7dc207a7d7ed60a60ea64cf89b3f19074f008a063f98dae3bad25f208ef9a023"} Feb 19 05:44:58 crc kubenswrapper[5012]: I0219 05:44:58.713723 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" path="/var/lib/kubelet/pods/7fdaa495-6cde-409a-871a-e334ca3f2a91/volumes" Feb 19 05:44:59 crc kubenswrapper[5012]: I0219 05:44:59.729226 5012 generic.go:334] "Generic (PLEG): container finished" podID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerID="2208308f841fe7a9243f88d9f9187f00d17fea4911bec7a9bd65972f32feba5b" exitCode=0 Feb 19 05:44:59 crc kubenswrapper[5012]: I0219 05:44:59.729271 5012 generic.go:334] "Generic (PLEG): container finished" podID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerID="da75d7466a0ff406fa2154591bf96210b474445eaf7b118b66950fc5a8bf1d53" 
exitCode=0 Feb 19 05:44:59 crc kubenswrapper[5012]: I0219 05:44:59.729329 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerDied","Data":"2208308f841fe7a9243f88d9f9187f00d17fea4911bec7a9bd65972f32feba5b"} Feb 19 05:44:59 crc kubenswrapper[5012]: I0219 05:44:59.729369 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerDied","Data":"da75d7466a0ff406fa2154591bf96210b474445eaf7b118b66950fc5a8bf1d53"} Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.142916 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v"] Feb 19 05:45:00 crc kubenswrapper[5012]: E0219 05:45:00.143621 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.143639 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.143807 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.144493 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.146583 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.150285 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.156830 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v"] Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.282981 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9vnh\" (UniqueName: \"kubernetes.io/projected/46070367-1765-4a70-b997-58b87ee1fbf1-kube-api-access-g9vnh\") pod \"collect-profiles-29524665-pjx7v\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.283045 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46070367-1765-4a70-b997-58b87ee1fbf1-config-volume\") pod \"collect-profiles-29524665-pjx7v\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.283133 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46070367-1765-4a70-b997-58b87ee1fbf1-secret-volume\") pod \"collect-profiles-29524665-pjx7v\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.384967 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9vnh\" (UniqueName: \"kubernetes.io/projected/46070367-1765-4a70-b997-58b87ee1fbf1-kube-api-access-g9vnh\") pod \"collect-profiles-29524665-pjx7v\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.385080 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46070367-1765-4a70-b997-58b87ee1fbf1-config-volume\") pod \"collect-profiles-29524665-pjx7v\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.385147 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46070367-1765-4a70-b997-58b87ee1fbf1-secret-volume\") pod \"collect-profiles-29524665-pjx7v\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.386034 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46070367-1765-4a70-b997-58b87ee1fbf1-config-volume\") pod \"collect-profiles-29524665-pjx7v\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.400872 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/46070367-1765-4a70-b997-58b87ee1fbf1-secret-volume\") pod \"collect-profiles-29524665-pjx7v\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.401089 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9vnh\" (UniqueName: \"kubernetes.io/projected/46070367-1765-4a70-b997-58b87ee1fbf1-kube-api-access-g9vnh\") pod \"collect-profiles-29524665-pjx7v\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:00 crc kubenswrapper[5012]: I0219 05:45:00.467637 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.622897 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.736415 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-log-httpd\") pod \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.737117 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3420a7c2-fc4c-4491-bddd-64a534d6f3cd" (UID: "3420a7c2-fc4c-4491-bddd-64a534d6f3cd"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.737163 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-586kp\" (UniqueName: \"kubernetes.io/projected/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-kube-api-access-586kp\") pod \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.737200 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-run-httpd\") pod \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.737380 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-scripts\") pod \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.737566 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-combined-ca-bundle\") pod \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.737648 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-config-data\") pod \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.737816 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-sg-core-conf-yaml\") pod \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\" (UID: \"3420a7c2-fc4c-4491-bddd-64a534d6f3cd\") " Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.738550 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3420a7c2-fc4c-4491-bddd-64a534d6f3cd" (UID: "3420a7c2-fc4c-4491-bddd-64a534d6f3cd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.738678 5012 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.743956 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-scripts" (OuterVolumeSpecName: "scripts") pod "3420a7c2-fc4c-4491-bddd-64a534d6f3cd" (UID: "3420a7c2-fc4c-4491-bddd-64a534d6f3cd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.751471 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-kube-api-access-586kp" (OuterVolumeSpecName: "kube-api-access-586kp") pod "3420a7c2-fc4c-4491-bddd-64a534d6f3cd" (UID: "3420a7c2-fc4c-4491-bddd-64a534d6f3cd"). InnerVolumeSpecName "kube-api-access-586kp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.772728 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3420a7c2-fc4c-4491-bddd-64a534d6f3cd","Type":"ContainerDied","Data":"8189678acbed8117b25379f69dcbf461a7f4e3e9e52f112862c9f9884dc160dd"} Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.772738 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.772773 5012 scope.go:117] "RemoveContainer" containerID="5f9e5835bb21eb902f863e74d75272487945828c4edd9c9a4570e94352117eb1" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.781128 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"f87036fc-fa94-4038-8b65-bb85d8ff6f10","Type":"ContainerStarted","Data":"336e58c7b0dd00f44a9184d19ccf8426b738f727c252218560f8195d3a7f320e"} Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.781178 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"f87036fc-fa94-4038-8b65-bb85d8ff6f10","Type":"ContainerStarted","Data":"391ee38ba53c8e8e76588386f77305034c1cba17e5da69d1e81492ce766e692b"} Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.808081 5012 scope.go:117] "RemoveContainer" containerID="7dc207a7d7ed60a60ea64cf89b3f19074f008a063f98dae3bad25f208ef9a023" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.812489 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3420a7c2-fc4c-4491-bddd-64a534d6f3cd" (UID: "3420a7c2-fc4c-4491-bddd-64a534d6f3cd"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.817491 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-vz94t" podStartSLOduration=2.146902028 podStartE2EDuration="14.817474171s" podCreationTimestamp="2026-02-19 05:44:48 +0000 UTC" firstStartedPulling="2026-02-19 05:44:49.873270548 +0000 UTC m=+1185.906593117" lastFinishedPulling="2026-02-19 05:45:02.543842651 +0000 UTC m=+1198.577165260" observedRunningTime="2026-02-19 05:45:02.798730215 +0000 UTC m=+1198.832052794" watchObservedRunningTime="2026-02-19 05:45:02.817474171 +0000 UTC m=+1198.850796740" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.822645 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3420a7c2-fc4c-4491-bddd-64a534d6f3cd" (UID: "3420a7c2-fc4c-4491-bddd-64a534d6f3cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.823115 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=6.823101498 podStartE2EDuration="6.823101498s" podCreationTimestamp="2026-02-19 05:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:02.813094124 +0000 UTC m=+1198.846416693" watchObservedRunningTime="2026-02-19 05:45:02.823101498 +0000 UTC m=+1198.856424067" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.831690 5012 scope.go:117] "RemoveContainer" containerID="2208308f841fe7a9243f88d9f9187f00d17fea4911bec7a9bd65972f32feba5b" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.841004 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.841030 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.841040 5012 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.841049 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-586kp\" (UniqueName: \"kubernetes.io/projected/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-kube-api-access-586kp\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.841057 5012 reconciler_common.go:293] "Volume detached for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.848430 5012 scope.go:117] "RemoveContainer" containerID="da75d7466a0ff406fa2154591bf96210b474445eaf7b118b66950fc5a8bf1d53" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.866373 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-config-data" (OuterVolumeSpecName: "config-data") pod "3420a7c2-fc4c-4491-bddd-64a534d6f3cd" (UID: "3420a7c2-fc4c-4491-bddd-64a534d6f3cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.943371 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3420a7c2-fc4c-4491-bddd-64a534d6f3cd-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:02 crc kubenswrapper[5012]: I0219 05:45:02.989095 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v"] Feb 19 05:45:02 crc kubenswrapper[5012]: W0219 05:45:02.991806 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46070367_1765_4a70_b997_58b87ee1fbf1.slice/crio-522859e776d29559dca0cff24a5d061eefa65ae60bc6b5fba45156ed6ac0e8f1 WatchSource:0}: Error finding container 522859e776d29559dca0cff24a5d061eefa65ae60bc6b5fba45156ed6ac0e8f1: Status 404 returned error can't find the container with id 522859e776d29559dca0cff24a5d061eefa65ae60bc6b5fba45156ed6ac0e8f1 Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.211855 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.229395 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/ceilometer-0"] Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.260422 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:45:03 crc kubenswrapper[5012]: E0219 05:45:03.261139 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="ceilometer-notification-agent" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.261156 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="ceilometer-notification-agent" Feb 19 05:45:03 crc kubenswrapper[5012]: E0219 05:45:03.261174 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="sg-core" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.261181 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="sg-core" Feb 19 05:45:03 crc kubenswrapper[5012]: E0219 05:45:03.261206 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="proxy-httpd" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.261228 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="proxy-httpd" Feb 19 05:45:03 crc kubenswrapper[5012]: E0219 05:45:03.261242 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="ceilometer-central-agent" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.261249 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="ceilometer-central-agent" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.261451 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="proxy-httpd" Feb 19 
05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.261465 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="ceilometer-notification-agent" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.261477 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="ceilometer-central-agent" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.261495 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" containerName="sg-core" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.261506 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fdaa495-6cde-409a-871a-e334ca3f2a91" containerName="watcher-decision-engine" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.263136 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.268793 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.269699 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.278827 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.352200 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-run-httpd\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.352464 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.352531 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-log-httpd\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.355488 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-scripts\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.355673 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-config-data\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.355828 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.355912 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2zr6\" (UniqueName: \"kubernetes.io/projected/01803024-8b09-46a8-849a-7129e5734fc5-kube-api-access-v2zr6\") pod 
\"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.457726 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-config-data\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.457822 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.457854 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2zr6\" (UniqueName: \"kubernetes.io/projected/01803024-8b09-46a8-849a-7129e5734fc5-kube-api-access-v2zr6\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.457934 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-run-httpd\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.457964 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.458011 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-log-httpd\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.458050 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-scripts\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.458994 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-log-httpd\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.459330 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-run-httpd\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.463635 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.465734 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc 
kubenswrapper[5012]: I0219 05:45:03.466569 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-config-data\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.466594 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-scripts\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.483218 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2zr6\" (UniqueName: \"kubernetes.io/projected/01803024-8b09-46a8-849a-7129e5734fc5-kube-api-access-v2zr6\") pod \"ceilometer-0\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.582755 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.807066 5012 generic.go:334] "Generic (PLEG): container finished" podID="46070367-1765-4a70-b997-58b87ee1fbf1" containerID="ef8d233d5ce4a4673c65e084ba6deb20a57df07604ba44e351882efa60733381" exitCode=0 Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.807557 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" event={"ID":"46070367-1765-4a70-b997-58b87ee1fbf1","Type":"ContainerDied","Data":"ef8d233d5ce4a4673c65e084ba6deb20a57df07604ba44e351882efa60733381"} Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.807709 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" event={"ID":"46070367-1765-4a70-b997-58b87ee1fbf1","Type":"ContainerStarted","Data":"522859e776d29559dca0cff24a5d061eefa65ae60bc6b5fba45156ed6ac0e8f1"} Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.815195 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vz94t" event={"ID":"3f256783-305c-4782-81c0-5aed8867b7e3","Type":"ContainerStarted","Data":"b0ed53407a3cb3810cc4f0ec6ea8d71443cb0203ae2152d5e770b7f505f82370"} Feb 19 05:45:03 crc kubenswrapper[5012]: I0219 05:45:03.900366 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:45:04 crc kubenswrapper[5012]: I0219 05:45:04.735181 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3420a7c2-fc4c-4491-bddd-64a534d6f3cd" path="/var/lib/kubelet/pods/3420a7c2-fc4c-4491-bddd-64a534d6f3cd/volumes" Feb 19 05:45:04 crc kubenswrapper[5012]: I0219 05:45:04.829487 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerStarted","Data":"9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5"} Feb 19 05:45:04 crc kubenswrapper[5012]: I0219 05:45:04.829532 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerStarted","Data":"1a4c3e21ec02a97624b92d231eadc367c369bbe32cd7bae830f477cfab60fbad"} Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.209352 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.313417 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46070367-1765-4a70-b997-58b87ee1fbf1-config-volume\") pod \"46070367-1765-4a70-b997-58b87ee1fbf1\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.313560 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9vnh\" (UniqueName: \"kubernetes.io/projected/46070367-1765-4a70-b997-58b87ee1fbf1-kube-api-access-g9vnh\") pod \"46070367-1765-4a70-b997-58b87ee1fbf1\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.313730 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46070367-1765-4a70-b997-58b87ee1fbf1-secret-volume\") pod \"46070367-1765-4a70-b997-58b87ee1fbf1\" (UID: \"46070367-1765-4a70-b997-58b87ee1fbf1\") " Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.314639 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46070367-1765-4a70-b997-58b87ee1fbf1-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"46070367-1765-4a70-b997-58b87ee1fbf1" (UID: "46070367-1765-4a70-b997-58b87ee1fbf1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.323460 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46070367-1765-4a70-b997-58b87ee1fbf1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "46070367-1765-4a70-b997-58b87ee1fbf1" (UID: "46070367-1765-4a70-b997-58b87ee1fbf1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.323488 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46070367-1765-4a70-b997-58b87ee1fbf1-kube-api-access-g9vnh" (OuterVolumeSpecName: "kube-api-access-g9vnh") pod "46070367-1765-4a70-b997-58b87ee1fbf1" (UID: "46070367-1765-4a70-b997-58b87ee1fbf1"). InnerVolumeSpecName "kube-api-access-g9vnh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.415932 5012 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46070367-1765-4a70-b997-58b87ee1fbf1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.415970 5012 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46070367-1765-4a70-b997-58b87ee1fbf1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.415988 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9vnh\" (UniqueName: \"kubernetes.io/projected/46070367-1765-4a70-b997-58b87ee1fbf1-kube-api-access-g9vnh\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.844148 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.844629 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v" event={"ID":"46070367-1765-4a70-b997-58b87ee1fbf1","Type":"ContainerDied","Data":"522859e776d29559dca0cff24a5d061eefa65ae60bc6b5fba45156ed6ac0e8f1"} Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.844671 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="522859e776d29559dca0cff24a5d061eefa65ae60bc6b5fba45156ed6ac0e8f1" Feb 19 05:45:05 crc kubenswrapper[5012]: I0219 05:45:05.846521 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerStarted","Data":"2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832"} Feb 19 05:45:06 crc kubenswrapper[5012]: I0219 
05:45:06.866529 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerStarted","Data":"e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b"} Feb 19 05:45:07 crc kubenswrapper[5012]: I0219 05:45:07.123917 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Feb 19 05:45:07 crc kubenswrapper[5012]: I0219 05:45:07.186319 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Feb 19 05:45:07 crc kubenswrapper[5012]: I0219 05:45:07.882852 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerStarted","Data":"f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5"} Feb 19 05:45:07 crc kubenswrapper[5012]: I0219 05:45:07.883043 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Feb 19 05:45:07 crc kubenswrapper[5012]: I0219 05:45:07.917566 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9006012490000002 podStartE2EDuration="4.917540305s" podCreationTimestamp="2026-02-19 05:45:03 +0000 UTC" firstStartedPulling="2026-02-19 05:45:03.933259976 +0000 UTC m=+1199.966582545" lastFinishedPulling="2026-02-19 05:45:06.950199032 +0000 UTC m=+1202.983521601" observedRunningTime="2026-02-19 05:45:07.907787257 +0000 UTC m=+1203.941109826" watchObservedRunningTime="2026-02-19 05:45:07.917540305 +0000 UTC m=+1203.950862894" Feb 19 05:45:07 crc kubenswrapper[5012]: I0219 05:45:07.942393 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Feb 19 05:45:08 crc kubenswrapper[5012]: I0219 05:45:08.895509 5012 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 05:45:17 crc kubenswrapper[5012]: I0219 05:45:17.004836 5012 generic.go:334] "Generic (PLEG): container finished" podID="3f256783-305c-4782-81c0-5aed8867b7e3" containerID="b0ed53407a3cb3810cc4f0ec6ea8d71443cb0203ae2152d5e770b7f505f82370" exitCode=0 Feb 19 05:45:17 crc kubenswrapper[5012]: I0219 05:45:17.004910 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vz94t" event={"ID":"3f256783-305c-4782-81c0-5aed8867b7e3","Type":"ContainerDied","Data":"b0ed53407a3cb3810cc4f0ec6ea8d71443cb0203ae2152d5e770b7f505f82370"} Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.474452 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.525875 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-scripts\") pod \"3f256783-305c-4782-81c0-5aed8867b7e3\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.526045 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8kw9\" (UniqueName: \"kubernetes.io/projected/3f256783-305c-4782-81c0-5aed8867b7e3-kube-api-access-j8kw9\") pod \"3f256783-305c-4782-81c0-5aed8867b7e3\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.526079 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-combined-ca-bundle\") pod \"3f256783-305c-4782-81c0-5aed8867b7e3\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.526120 5012 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-config-data\") pod \"3f256783-305c-4782-81c0-5aed8867b7e3\" (UID: \"3f256783-305c-4782-81c0-5aed8867b7e3\") " Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.535297 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f256783-305c-4782-81c0-5aed8867b7e3-kube-api-access-j8kw9" (OuterVolumeSpecName: "kube-api-access-j8kw9") pod "3f256783-305c-4782-81c0-5aed8867b7e3" (UID: "3f256783-305c-4782-81c0-5aed8867b7e3"). InnerVolumeSpecName "kube-api-access-j8kw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.536016 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-scripts" (OuterVolumeSpecName: "scripts") pod "3f256783-305c-4782-81c0-5aed8867b7e3" (UID: "3f256783-305c-4782-81c0-5aed8867b7e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.579661 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-config-data" (OuterVolumeSpecName: "config-data") pod "3f256783-305c-4782-81c0-5aed8867b7e3" (UID: "3f256783-305c-4782-81c0-5aed8867b7e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.584441 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f256783-305c-4782-81c0-5aed8867b7e3" (UID: "3f256783-305c-4782-81c0-5aed8867b7e3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.629527 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.629560 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8kw9\" (UniqueName: \"kubernetes.io/projected/3f256783-305c-4782-81c0-5aed8867b7e3-kube-api-access-j8kw9\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.629575 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:18 crc kubenswrapper[5012]: I0219 05:45:18.629588 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f256783-305c-4782-81c0-5aed8867b7e3-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.034632 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vz94t" event={"ID":"3f256783-305c-4782-81c0-5aed8867b7e3","Type":"ContainerDied","Data":"febd118a98b60967b90a09d8286afcce4d62eeec224ba263a32dcf0170bba3da"} Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.034707 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vz94t" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.034719 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="febd118a98b60967b90a09d8286afcce4d62eeec224ba263a32dcf0170bba3da" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.213938 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 05:45:19 crc kubenswrapper[5012]: E0219 05:45:19.214669 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46070367-1765-4a70-b997-58b87ee1fbf1" containerName="collect-profiles" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.214701 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="46070367-1765-4a70-b997-58b87ee1fbf1" containerName="collect-profiles" Feb 19 05:45:19 crc kubenswrapper[5012]: E0219 05:45:19.214722 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f256783-305c-4782-81c0-5aed8867b7e3" containerName="nova-cell0-conductor-db-sync" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.214736 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f256783-305c-4782-81c0-5aed8867b7e3" containerName="nova-cell0-conductor-db-sync" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.215092 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="46070367-1765-4a70-b997-58b87ee1fbf1" containerName="collect-profiles" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.215136 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f256783-305c-4782-81c0-5aed8867b7e3" containerName="nova-cell0-conductor-db-sync" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.216465 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.220110 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-bjq99" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.220824 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.227835 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.247336 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4558m\" (UniqueName: \"kubernetes.io/projected/6852caab-c1b6-40cd-b5df-88d22f6016bd-kube-api-access-4558m\") pod \"nova-cell0-conductor-0\" (UID: \"6852caab-c1b6-40cd-b5df-88d22f6016bd\") " pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.247425 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6852caab-c1b6-40cd-b5df-88d22f6016bd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6852caab-c1b6-40cd-b5df-88d22f6016bd\") " pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.247481 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6852caab-c1b6-40cd-b5df-88d22f6016bd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6852caab-c1b6-40cd-b5df-88d22f6016bd\") " pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.350679 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6852caab-c1b6-40cd-b5df-88d22f6016bd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6852caab-c1b6-40cd-b5df-88d22f6016bd\") " pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.351002 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4558m\" (UniqueName: \"kubernetes.io/projected/6852caab-c1b6-40cd-b5df-88d22f6016bd-kube-api-access-4558m\") pod \"nova-cell0-conductor-0\" (UID: \"6852caab-c1b6-40cd-b5df-88d22f6016bd\") " pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.351103 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6852caab-c1b6-40cd-b5df-88d22f6016bd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6852caab-c1b6-40cd-b5df-88d22f6016bd\") " pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.355892 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6852caab-c1b6-40cd-b5df-88d22f6016bd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6852caab-c1b6-40cd-b5df-88d22f6016bd\") " pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.356779 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6852caab-c1b6-40cd-b5df-88d22f6016bd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6852caab-c1b6-40cd-b5df-88d22f6016bd\") " pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.379535 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4558m\" (UniqueName: \"kubernetes.io/projected/6852caab-c1b6-40cd-b5df-88d22f6016bd-kube-api-access-4558m\") pod \"nova-cell0-conductor-0\" (UID: 
\"6852caab-c1b6-40cd-b5df-88d22f6016bd\") " pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.564171 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:19 crc kubenswrapper[5012]: I0219 05:45:19.916195 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 05:45:20 crc kubenswrapper[5012]: I0219 05:45:20.051769 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6852caab-c1b6-40cd-b5df-88d22f6016bd","Type":"ContainerStarted","Data":"8632a068f01cba262dbe94641df1a4dc199f5d9de4a76d5d019edc0991514ad1"} Feb 19 05:45:21 crc kubenswrapper[5012]: I0219 05:45:21.069989 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6852caab-c1b6-40cd-b5df-88d22f6016bd","Type":"ContainerStarted","Data":"b0916e8a409d5228426e51fc0080affcdf4fe2e92265e00325e20518ef1ee8d7"} Feb 19 05:45:21 crc kubenswrapper[5012]: I0219 05:45:21.071693 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:21 crc kubenswrapper[5012]: I0219 05:45:21.098748 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.098724335 podStartE2EDuration="2.098724335s" podCreationTimestamp="2026-02-19 05:45:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:21.091733365 +0000 UTC m=+1217.125055974" watchObservedRunningTime="2026-02-19 05:45:21.098724335 +0000 UTC m=+1217.132046904" Feb 19 05:45:29 crc kubenswrapper[5012]: I0219 05:45:29.615829 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 
05:45:30.261529 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-nr45z"] Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.267812 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nr45z" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.276411 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.285248 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-nr45z"] Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.292999 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.302906 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.303096 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7wrx\" (UniqueName: \"kubernetes.io/projected/70ce9757-cdf1-4864-95ad-9d25fb9830a9-kube-api-access-m7wrx\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.303221 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-scripts\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " 
pod="openstack/nova-cell0-cell-mapping-nr45z" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.303326 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-config-data\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.406804 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-scripts\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.406871 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-config-data\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.406917 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.407019 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7wrx\" (UniqueName: \"kubernetes.io/projected/70ce9757-cdf1-4864-95ad-9d25fb9830a9-kube-api-access-m7wrx\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " 
pod="openstack/nova-cell0-cell-mapping-nr45z" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.415083 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-scripts\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.417711 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.427368 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.429749 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.435758 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.438259 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-config-data\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.445721 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.451328 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7wrx\" (UniqueName: \"kubernetes.io/projected/70ce9757-cdf1-4864-95ad-9d25fb9830a9-kube-api-access-m7wrx\") pod \"nova-cell0-cell-mapping-nr45z\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") " pod="openstack/nova-cell0-cell-mapping-nr45z" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.509439 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c5be03-d36f-4a6a-8359-535ed4ad505d-logs\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.509576 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.509618 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-config-data\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.509698 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzdm8\" (UniqueName: \"kubernetes.io/projected/b9c5be03-d36f-4a6a-8359-535ed4ad505d-kube-api-access-rzdm8\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.549414 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.551352 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.556167 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.577565 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.600008 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nr45z" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.627185 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzdm8\" (UniqueName: \"kubernetes.io/projected/b9c5be03-d36f-4a6a-8359-535ed4ad505d-kube-api-access-rzdm8\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.627262 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c5be03-d36f-4a6a-8359-535ed4ad505d-logs\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.627352 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.627385 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-config-data\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.629711 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c5be03-d36f-4a6a-8359-535ed4ad505d-logs\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.638668 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-config-data\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.649162 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.660169 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzdm8\" (UniqueName: \"kubernetes.io/projected/b9c5be03-d36f-4a6a-8359-535ed4ad505d-kube-api-access-rzdm8\") pod \"nova-api-0\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") " pod="openstack/nova-api-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.732275 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de786391-8b45-4a24-9c56-2d4c86d5cfba-logs\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.732346 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.732405 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzrnl\" (UniqueName: \"kubernetes.io/projected/de786391-8b45-4a24-9c56-2d4c86d5cfba-kube-api-access-lzrnl\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " 
pod="openstack/nova-metadata-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.732475 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-config-data\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.744415 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-647496cc8f-4z5vx"] Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.745831 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.747078 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.755338 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.765872 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.767528 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647496cc8f-4z5vx"] Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.794998 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.807973 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.809966 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.818067 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.838662 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.838731 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfxlj\" (UniqueName: \"kubernetes.io/projected/c1589f54-6631-4004-b2a9-e253b43b0644-kube-api-access-mfxlj\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.838760 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-svc\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.838798 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-swift-storage-0\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.838819 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.838859 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzcfh\" (UniqueName: \"kubernetes.io/projected/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-kube-api-access-kzcfh\") pod \"nova-cell1-novncproxy-0\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.838880 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de786391-8b45-4a24-9c56-2d4c86d5cfba-logs\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.838901 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.838967 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzrnl\" (UniqueName: \"kubernetes.io/projected/de786391-8b45-4a24-9c56-2d4c86d5cfba-kube-api-access-lzrnl\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.839019 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-sb\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.839039 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-nb\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.839081 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-config-data\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.839097 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-config\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.840657 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de786391-8b45-4a24-9c56-2d4c86d5cfba-logs\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.850413 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.852722 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.861482 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.863154 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-config-data\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.911049 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzrnl\" (UniqueName: \"kubernetes.io/projected/de786391-8b45-4a24-9c56-2d4c86d5cfba-kube-api-access-lzrnl\") pod \"nova-metadata-0\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " pod="openstack/nova-metadata-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941252 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-sb\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941338 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-nb\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941377 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj8q6\" (UniqueName: \"kubernetes.io/projected/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-kube-api-access-fj8q6\") pod \"nova-scheduler-0\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941413 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-config-data\") pod \"nova-scheduler-0\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941435 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-config\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941463 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941494 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfxlj\" (UniqueName: \"kubernetes.io/projected/c1589f54-6631-4004-b2a9-e253b43b0644-kube-api-access-mfxlj\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941521 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-svc\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941561 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-swift-storage-0\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941588 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941611 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.941646 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzcfh\" (UniqueName: \"kubernetes.io/projected/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-kube-api-access-kzcfh\") pod \"nova-cell1-novncproxy-0\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.943521 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-sb\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.944082 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-nb\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.944657 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-config\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.962067 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.957110 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-swift-storage-0\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.964455 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-svc\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: 
\"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.980406 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzcfh\" (UniqueName: \"kubernetes.io/projected/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-kube-api-access-kzcfh\") pod \"nova-cell1-novncproxy-0\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:45:30 crc kubenswrapper[5012]: I0219 05:45:30.985001 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.022378 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfxlj\" (UniqueName: \"kubernetes.io/projected/c1589f54-6631-4004-b2a9-e253b43b0644-kube-api-access-mfxlj\") pod \"dnsmasq-dns-647496cc8f-4z5vx\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.044421 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj8q6\" (UniqueName: \"kubernetes.io/projected/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-kube-api-access-fj8q6\") pod \"nova-scheduler-0\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.044481 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-config-data\") pod \"nova-scheduler-0\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:31 crc 
kubenswrapper[5012]: I0219 05:45:31.044567 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.057108 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-config-data\") pod \"nova-scheduler-0\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.058066 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.069552 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj8q6\" (UniqueName: \"kubernetes.io/projected/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-kube-api-access-fj8q6\") pod \"nova-scheduler-0\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.101118 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.110352 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.166728 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.207531 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 05:45:31 crc kubenswrapper[5012]: I0219 05:45:31.440434 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-nr45z"] Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.629167 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.821425 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nbn8z"] Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.822924 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nbn8z" Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.825620 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.828699 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.828702 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nbn8z"] Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.984157 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-config-data\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z" Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.984753 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-szzzk\" (UniqueName: \"kubernetes.io/projected/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-kube-api-access-szzzk\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z" Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.984825 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z" Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:31.984926 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-scripts\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z" Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.087166 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-config-data\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z" Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.087252 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szzzk\" (UniqueName: \"kubernetes.io/projected/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-kube-api-access-szzzk\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z" Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.087326 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z" Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.087401 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-scripts\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z" Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.105969 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-config-data\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z" Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.106151 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z" Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.109063 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-scripts\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z" Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.129749 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-szzzk\" (UniqueName: \"kubernetes.io/projected/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-kube-api-access-szzzk\") pod \"nova-cell1-conductor-db-sync-nbn8z\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " pod="openstack/nova-cell1-conductor-db-sync-nbn8z" Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.149974 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nbn8z" Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.246819 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nr45z" event={"ID":"70ce9757-cdf1-4864-95ad-9d25fb9830a9","Type":"ContainerStarted","Data":"021fb32c8f118be6cb115c199b5bccac76ab7b25e96dc7239f4fa322280c2c3c"} Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.246881 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nr45z" event={"ID":"70ce9757-cdf1-4864-95ad-9d25fb9830a9","Type":"ContainerStarted","Data":"7bfe1197486b22ee1f88d8b65bc65fff0d81ce7b626661bba38e8562901e7bb1"} Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.250953 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9c5be03-d36f-4a6a-8359-535ed4ad505d","Type":"ContainerStarted","Data":"ecb97b91d6f4ced51237b91313e01c9556310ff6ba6362c7e6f2808ce0a033d1"} Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.604860 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-nr45z" podStartSLOduration=2.604830699 podStartE2EDuration="2.604830699s" podCreationTimestamp="2026-02-19 05:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:32.266584897 +0000 UTC m=+1228.299907516" watchObservedRunningTime="2026-02-19 05:45:32.604830699 +0000 UTC m=+1228.638153268" Feb 19 05:45:32 crc 
kubenswrapper[5012]: I0219 05:45:32.640386 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.756617 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.786391 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647496cc8f-4z5vx"] Feb 19 05:45:32 crc kubenswrapper[5012]: W0219 05:45:32.797096 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb843c15_c78d_4b5e_91b3_31ec0befd9fe.slice/crio-ee7567e98958d50555ddbca81a211daa490b9d51437c109eb4b01f873305fc7c WatchSource:0}: Error finding container ee7567e98958d50555ddbca81a211daa490b9d51437c109eb4b01f873305fc7c: Status 404 returned error can't find the container with id ee7567e98958d50555ddbca81a211daa490b9d51437c109eb4b01f873305fc7c Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.815742 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:45:32 crc kubenswrapper[5012]: I0219 05:45:32.898020 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nbn8z"] Feb 19 05:45:33 crc kubenswrapper[5012]: I0219 05:45:33.272668 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de786391-8b45-4a24-9c56-2d4c86d5cfba","Type":"ContainerStarted","Data":"95592dc0a14ff2399f527d9c89ef6e64abfe874f4d909ae2d5fc00044b54d56c"} Feb 19 05:45:33 crc kubenswrapper[5012]: I0219 05:45:33.276278 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae","Type":"ContainerStarted","Data":"f59fd4a4ac42380c62e5cbec861422215a50132042a18819d7c99682128821ac"} Feb 19 05:45:33 crc kubenswrapper[5012]: I0219 05:45:33.281860 5012 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nbn8z" event={"ID":"7fc8fbb1-0e37-419f-86e0-6ce8db99225d","Type":"ContainerStarted","Data":"d98f7ee44c86e6ff53f7f25188347cf80c40d13a66e4ecf4956ac175a094de8b"} Feb 19 05:45:33 crc kubenswrapper[5012]: I0219 05:45:33.283766 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fb843c15-c78d-4b5e-91b3-31ec0befd9fe","Type":"ContainerStarted","Data":"ee7567e98958d50555ddbca81a211daa490b9d51437c109eb4b01f873305fc7c"} Feb 19 05:45:33 crc kubenswrapper[5012]: I0219 05:45:33.289883 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" event={"ID":"c1589f54-6631-4004-b2a9-e253b43b0644","Type":"ContainerStarted","Data":"8a23cad7dbe6ef631f80ea11b62d7b988e6b72ef836fd0ba728b4bc06cb53bf4"} Feb 19 05:45:33 crc kubenswrapper[5012]: I0219 05:45:33.599295 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 05:45:34 crc kubenswrapper[5012]: I0219 05:45:34.290645 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:34 crc kubenswrapper[5012]: I0219 05:45:34.306394 5012 generic.go:334] "Generic (PLEG): container finished" podID="c1589f54-6631-4004-b2a9-e253b43b0644" containerID="ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8" exitCode=0 Feb 19 05:45:34 crc kubenswrapper[5012]: I0219 05:45:34.306455 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" event={"ID":"c1589f54-6631-4004-b2a9-e253b43b0644","Type":"ContainerDied","Data":"ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8"} Feb 19 05:45:34 crc kubenswrapper[5012]: I0219 05:45:34.307383 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.344672 5012 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de786391-8b45-4a24-9c56-2d4c86d5cfba","Type":"ContainerStarted","Data":"c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3"} Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.347123 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nbn8z" event={"ID":"7fc8fbb1-0e37-419f-86e0-6ce8db99225d","Type":"ContainerStarted","Data":"0b98797f9f7e97071d4699ed1c59c23ddac69aff6fb8708f48bdc42a56a8cf34"} Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.349400 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc" gracePeriod=30 Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.349500 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae","Type":"ContainerStarted","Data":"602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc"} Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.357137 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9c5be03-d36f-4a6a-8359-535ed4ad505d","Type":"ContainerStarted","Data":"bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b"} Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.368814 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-nbn8z" podStartSLOduration=4.368798819 podStartE2EDuration="4.368798819s" podCreationTimestamp="2026-02-19 05:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:35.360903717 +0000 UTC 
m=+1231.394226286" watchObservedRunningTime="2026-02-19 05:45:35.368798819 +0000 UTC m=+1231.402121388" Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.372123 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" event={"ID":"c1589f54-6631-4004-b2a9-e253b43b0644","Type":"ContainerStarted","Data":"401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb"} Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.373205 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.383503 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.193492346 podStartE2EDuration="5.383482306s" podCreationTimestamp="2026-02-19 05:45:30 +0000 UTC" firstStartedPulling="2026-02-19 05:45:32.633266301 +0000 UTC m=+1228.666588870" lastFinishedPulling="2026-02-19 05:45:34.823256261 +0000 UTC m=+1230.856578830" observedRunningTime="2026-02-19 05:45:35.375203805 +0000 UTC m=+1231.408526394" watchObservedRunningTime="2026-02-19 05:45:35.383482306 +0000 UTC m=+1231.416804895" Feb 19 05:45:35 crc kubenswrapper[5012]: I0219 05:45:35.406172 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" podStartSLOduration=5.406142867 podStartE2EDuration="5.406142867s" podCreationTimestamp="2026-02-19 05:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:35.397773613 +0000 UTC m=+1231.431096182" watchObservedRunningTime="2026-02-19 05:45:35.406142867 +0000 UTC m=+1231.439465436" Feb 19 05:45:36 crc kubenswrapper[5012]: I0219 05:45:36.102898 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:45:36 crc kubenswrapper[5012]: 
I0219 05:45:36.383720 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fb843c15-c78d-4b5e-91b3-31ec0befd9fe","Type":"ContainerStarted","Data":"afdc318ce7e7f31c55b83d198c0056a9143debe76f4068e0b8b55a3cd789f800"} Feb 19 05:45:36 crc kubenswrapper[5012]: I0219 05:45:36.386613 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de786391-8b45-4a24-9c56-2d4c86d5cfba","Type":"ContainerStarted","Data":"4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5"} Feb 19 05:45:36 crc kubenswrapper[5012]: I0219 05:45:36.386730 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerName="nova-metadata-log" containerID="cri-o://c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3" gracePeriod=30 Feb 19 05:45:36 crc kubenswrapper[5012]: I0219 05:45:36.386930 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerName="nova-metadata-metadata" containerID="cri-o://4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5" gracePeriod=30 Feb 19 05:45:36 crc kubenswrapper[5012]: I0219 05:45:36.394510 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9c5be03-d36f-4a6a-8359-535ed4ad505d","Type":"ContainerStarted","Data":"92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42"} Feb 19 05:45:36 crc kubenswrapper[5012]: I0219 05:45:36.410805 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.220773101 podStartE2EDuration="6.410769168s" podCreationTimestamp="2026-02-19 05:45:30 +0000 UTC" firstStartedPulling="2026-02-19 05:45:32.800622285 +0000 UTC m=+1228.833944854" lastFinishedPulling="2026-02-19 05:45:35.990618342 +0000 UTC 
m=+1232.023940921" observedRunningTime="2026-02-19 05:45:36.406950635 +0000 UTC m=+1232.440273204" watchObservedRunningTime="2026-02-19 05:45:36.410769168 +0000 UTC m=+1232.444091737" Feb 19 05:45:36 crc kubenswrapper[5012]: I0219 05:45:36.432666 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.336128885 podStartE2EDuration="6.43263525s" podCreationTimestamp="2026-02-19 05:45:30 +0000 UTC" firstStartedPulling="2026-02-19 05:45:32.766894114 +0000 UTC m=+1228.800216683" lastFinishedPulling="2026-02-19 05:45:34.863400479 +0000 UTC m=+1230.896723048" observedRunningTime="2026-02-19 05:45:36.431864951 +0000 UTC m=+1232.465187530" watchObservedRunningTime="2026-02-19 05:45:36.43263525 +0000 UTC m=+1232.465957819" Feb 19 05:45:36 crc kubenswrapper[5012]: I0219 05:45:36.459178 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.272823787 podStartE2EDuration="6.459146705s" podCreationTimestamp="2026-02-19 05:45:30 +0000 UTC" firstStartedPulling="2026-02-19 05:45:31.639994698 +0000 UTC m=+1227.673317267" lastFinishedPulling="2026-02-19 05:45:34.826317616 +0000 UTC m=+1230.859640185" observedRunningTime="2026-02-19 05:45:36.447105822 +0000 UTC m=+1232.480428411" watchObservedRunningTime="2026-02-19 05:45:36.459146705 +0000 UTC m=+1232.492469274" Feb 19 05:45:37 crc kubenswrapper[5012]: E0219 05:45:37.093259 5012 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde786391_8b45_4a24_9c56_2d4c86d5cfba.slice/crio-4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde786391_8b45_4a24_9c56_2d4c86d5cfba.slice/crio-conmon-4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5.scope\": RecentStats: unable to find data in memory cache]" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.363289 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.403156 5012 generic.go:334] "Generic (PLEG): container finished" podID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerID="4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5" exitCode=0 Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.403196 5012 generic.go:334] "Generic (PLEG): container finished" podID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerID="c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3" exitCode=143 Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.403278 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.404169 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de786391-8b45-4a24-9c56-2d4c86d5cfba","Type":"ContainerDied","Data":"4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5"} Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.404224 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de786391-8b45-4a24-9c56-2d4c86d5cfba","Type":"ContainerDied","Data":"c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3"} Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.404235 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"de786391-8b45-4a24-9c56-2d4c86d5cfba","Type":"ContainerDied","Data":"95592dc0a14ff2399f527d9c89ef6e64abfe874f4d909ae2d5fc00044b54d56c"} Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.404253 5012 scope.go:117] "RemoveContainer" containerID="4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.442963 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-combined-ca-bundle\") pod \"de786391-8b45-4a24-9c56-2d4c86d5cfba\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.443104 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzrnl\" (UniqueName: \"kubernetes.io/projected/de786391-8b45-4a24-9c56-2d4c86d5cfba-kube-api-access-lzrnl\") pod \"de786391-8b45-4a24-9c56-2d4c86d5cfba\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.443135 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-config-data\") pod \"de786391-8b45-4a24-9c56-2d4c86d5cfba\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.443161 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de786391-8b45-4a24-9c56-2d4c86d5cfba-logs\") pod \"de786391-8b45-4a24-9c56-2d4c86d5cfba\" (UID: \"de786391-8b45-4a24-9c56-2d4c86d5cfba\") " Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.444347 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de786391-8b45-4a24-9c56-2d4c86d5cfba-logs" (OuterVolumeSpecName: "logs") pod "de786391-8b45-4a24-9c56-2d4c86d5cfba" (UID: "de786391-8b45-4a24-9c56-2d4c86d5cfba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.448905 5012 scope.go:117] "RemoveContainer" containerID="c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.467797 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de786391-8b45-4a24-9c56-2d4c86d5cfba-kube-api-access-lzrnl" (OuterVolumeSpecName: "kube-api-access-lzrnl") pod "de786391-8b45-4a24-9c56-2d4c86d5cfba" (UID: "de786391-8b45-4a24-9c56-2d4c86d5cfba"). InnerVolumeSpecName "kube-api-access-lzrnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.485442 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de786391-8b45-4a24-9c56-2d4c86d5cfba" (UID: "de786391-8b45-4a24-9c56-2d4c86d5cfba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.526914 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-config-data" (OuterVolumeSpecName: "config-data") pod "de786391-8b45-4a24-9c56-2d4c86d5cfba" (UID: "de786391-8b45-4a24-9c56-2d4c86d5cfba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.545577 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzrnl\" (UniqueName: \"kubernetes.io/projected/de786391-8b45-4a24-9c56-2d4c86d5cfba-kube-api-access-lzrnl\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.545609 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.545621 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de786391-8b45-4a24-9c56-2d4c86d5cfba-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.545629 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de786391-8b45-4a24-9c56-2d4c86d5cfba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.584548 5012 scope.go:117] "RemoveContainer" containerID="4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5" Feb 19 05:45:37 crc kubenswrapper[5012]: E0219 05:45:37.586092 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5\": container with ID 
starting with 4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5 not found: ID does not exist" containerID="4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.586279 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5"} err="failed to get container status \"4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5\": rpc error: code = NotFound desc = could not find container \"4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5\": container with ID starting with 4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5 not found: ID does not exist" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.586395 5012 scope.go:117] "RemoveContainer" containerID="c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3" Feb 19 05:45:37 crc kubenswrapper[5012]: E0219 05:45:37.586808 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3\": container with ID starting with c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3 not found: ID does not exist" containerID="c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.586829 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3"} err="failed to get container status \"c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3\": rpc error: code = NotFound desc = could not find container \"c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3\": container with ID starting with c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3 not found: 
ID does not exist" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.586842 5012 scope.go:117] "RemoveContainer" containerID="4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.587185 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5"} err="failed to get container status \"4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5\": rpc error: code = NotFound desc = could not find container \"4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5\": container with ID starting with 4b87da0b50e399777beeab5ce602faf5e9cffe61f9050541c2a89b4475482fe5 not found: ID does not exist" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.587209 5012 scope.go:117] "RemoveContainer" containerID="c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.587446 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3"} err="failed to get container status \"c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3\": rpc error: code = NotFound desc = could not find container \"c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3\": container with ID starting with c8ca785d98d867e386d2812b3c67d490985a925b16d24305e232eaa08de511b3 not found: ID does not exist" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.738144 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.753180 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.768398 5012 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-metadata-0"] Feb 19 05:45:37 crc kubenswrapper[5012]: E0219 05:45:37.768888 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerName="nova-metadata-log" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.768905 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerName="nova-metadata-log" Feb 19 05:45:37 crc kubenswrapper[5012]: E0219 05:45:37.768932 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerName="nova-metadata-metadata" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.768939 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerName="nova-metadata-metadata" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.769136 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerName="nova-metadata-log" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.769159 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="de786391-8b45-4a24-9c56-2d4c86d5cfba" containerName="nova-metadata-metadata" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.770764 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.776988 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.779356 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.820560 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.850905 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-config-data\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.851065 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f259e859-c226-472e-85d3-8b5a9c7ba66a-logs\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.851130 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.851154 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lzqd\" (UniqueName: \"kubernetes.io/projected/f259e859-c226-472e-85d3-8b5a9c7ba66a-kube-api-access-2lzqd\") pod \"nova-metadata-0\" (UID: 
\"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.851207 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.953099 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f259e859-c226-472e-85d3-8b5a9c7ba66a-logs\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.953195 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.953234 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lzqd\" (UniqueName: \"kubernetes.io/projected/f259e859-c226-472e-85d3-8b5a9c7ba66a-kube-api-access-2lzqd\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.953291 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 
05:45:37.953968 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-config-data\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.954160 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f259e859-c226-472e-85d3-8b5a9c7ba66a-logs\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.959240 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.959469 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.966071 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-config-data\") pod \"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:37 crc kubenswrapper[5012]: I0219 05:45:37.972980 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lzqd\" (UniqueName: \"kubernetes.io/projected/f259e859-c226-472e-85d3-8b5a9c7ba66a-kube-api-access-2lzqd\") pod 
\"nova-metadata-0\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") " pod="openstack/nova-metadata-0" Feb 19 05:45:38 crc kubenswrapper[5012]: I0219 05:45:38.095899 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:45:38 crc kubenswrapper[5012]: I0219 05:45:38.574827 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 05:45:38 crc kubenswrapper[5012]: I0219 05:45:38.575379 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6c04ef21-3d68-44e8-ba69-164f3b32b7a0" containerName="kube-state-metrics" containerID="cri-o://caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73" gracePeriod=30 Feb 19 05:45:38 crc kubenswrapper[5012]: I0219 05:45:38.585034 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:38 crc kubenswrapper[5012]: I0219 05:45:38.714083 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de786391-8b45-4a24-9c56-2d4c86d5cfba" path="/var/lib/kubelet/pods/de786391-8b45-4a24-9c56-2d4c86d5cfba/volumes" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.081910 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.179340 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcbl4\" (UniqueName: \"kubernetes.io/projected/6c04ef21-3d68-44e8-ba69-164f3b32b7a0-kube-api-access-jcbl4\") pod \"6c04ef21-3d68-44e8-ba69-164f3b32b7a0\" (UID: \"6c04ef21-3d68-44e8-ba69-164f3b32b7a0\") " Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.189224 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c04ef21-3d68-44e8-ba69-164f3b32b7a0-kube-api-access-jcbl4" (OuterVolumeSpecName: "kube-api-access-jcbl4") pod "6c04ef21-3d68-44e8-ba69-164f3b32b7a0" (UID: "6c04ef21-3d68-44e8-ba69-164f3b32b7a0"). InnerVolumeSpecName "kube-api-access-jcbl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.282714 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcbl4\" (UniqueName: \"kubernetes.io/projected/6c04ef21-3d68-44e8-ba69-164f3b32b7a0-kube-api-access-jcbl4\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.426342 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f259e859-c226-472e-85d3-8b5a9c7ba66a","Type":"ContainerStarted","Data":"a858fcdf83bd73b366127d94812fddc1ba76c33cc1bd9175a15eaf1cb800af07"} Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.426388 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f259e859-c226-472e-85d3-8b5a9c7ba66a","Type":"ContainerStarted","Data":"6d55bd48d5dc9a998906eb29d5ab2a1f6e9c30189d3ebafeea49dae332345272"} Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.426401 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"f259e859-c226-472e-85d3-8b5a9c7ba66a","Type":"ContainerStarted","Data":"1f7a7e5be52d1531162f80520ab7a9bc9939afb0e0d82ef9812331d77771bcd1"} Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.427677 5012 generic.go:334] "Generic (PLEG): container finished" podID="6c04ef21-3d68-44e8-ba69-164f3b32b7a0" containerID="caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73" exitCode=2 Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.427725 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6c04ef21-3d68-44e8-ba69-164f3b32b7a0","Type":"ContainerDied","Data":"caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73"} Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.427735 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.427759 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6c04ef21-3d68-44e8-ba69-164f3b32b7a0","Type":"ContainerDied","Data":"1d78bdb8cd099c1e00c91080ebe4740fa66e2f4e7fc08f7ed987fe609d80ac23"} Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.427779 5012 scope.go:117] "RemoveContainer" containerID="caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.456870 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.456836832 podStartE2EDuration="2.456836832s" podCreationTimestamp="2026-02-19 05:45:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:39.450852177 +0000 UTC m=+1235.484174746" watchObservedRunningTime="2026-02-19 05:45:39.456836832 +0000 UTC m=+1235.490159401" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.465095 
5012 scope.go:117] "RemoveContainer" containerID="caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73" Feb 19 05:45:39 crc kubenswrapper[5012]: E0219 05:45:39.465889 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73\": container with ID starting with caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73 not found: ID does not exist" containerID="caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.466067 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73"} err="failed to get container status \"caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73\": rpc error: code = NotFound desc = could not find container \"caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73\": container with ID starting with caf4a335e51dbdeb57eecd8eed937a999689ef3c0e38cbd1f847f04ad510ad73 not found: ID does not exist" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.498414 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.536422 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.555913 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 05:45:39 crc kubenswrapper[5012]: E0219 05:45:39.556665 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c04ef21-3d68-44e8-ba69-164f3b32b7a0" containerName="kube-state-metrics" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.556680 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6c04ef21-3d68-44e8-ba69-164f3b32b7a0" containerName="kube-state-metrics" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.557079 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c04ef21-3d68-44e8-ba69-164f3b32b7a0" containerName="kube-state-metrics" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.558147 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.561842 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.562254 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.568185 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.690809 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.690918 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.690982 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4vnx\" 
(UniqueName: \"kubernetes.io/projected/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-kube-api-access-l4vnx\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.691048 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.793277 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.793384 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4vnx\" (UniqueName: \"kubernetes.io/projected/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-kube-api-access-l4vnx\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.793457 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.793492 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.799726 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.800002 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.800915 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.822861 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4vnx\" (UniqueName: \"kubernetes.io/projected/cc79bf66-4a34-43fe-ad03-4e6ce60d2c44-kube-api-access-l4vnx\") pod \"kube-state-metrics-0\" (UID: \"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44\") " pod="openstack/kube-state-metrics-0" Feb 19 05:45:39 crc kubenswrapper[5012]: I0219 05:45:39.895995 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 19 05:45:40 crc kubenswrapper[5012]: I0219 05:45:40.468012 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 19 05:45:40 crc kubenswrapper[5012]: I0219 05:45:40.715894 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c04ef21-3d68-44e8-ba69-164f3b32b7a0" path="/var/lib/kubelet/pods/6c04ef21-3d68-44e8-ba69-164f3b32b7a0/volumes"
Feb 19 05:45:40 crc kubenswrapper[5012]: I0219 05:45:40.716723 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 05:45:40 crc kubenswrapper[5012]: I0219 05:45:40.717110 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="ceilometer-central-agent" containerID="cri-o://9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5" gracePeriod=30
Feb 19 05:45:40 crc kubenswrapper[5012]: I0219 05:45:40.717242 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="sg-core" containerID="cri-o://e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b" gracePeriod=30
Feb 19 05:45:40 crc kubenswrapper[5012]: I0219 05:45:40.717311 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="proxy-httpd" containerID="cri-o://f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5" gracePeriod=30
Feb 19 05:45:40 crc kubenswrapper[5012]: I0219 05:45:40.717351 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="ceilometer-notification-agent" containerID="cri-o://2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832" gracePeriod=30
Feb 19 05:45:40 crc kubenswrapper[5012]: I0219 05:45:40.853561 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 19 05:45:40 crc kubenswrapper[5012]: I0219 05:45:40.853648 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.112608 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx"
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.203493 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl"]
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.203885 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" podUID="9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" containerName="dnsmasq-dns" containerID="cri-o://0665dea2b78f255d6fbccb798f4cfaab479a2e00f62ee271920f433e530bc5cb" gracePeriod=10
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.213045 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.213112 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.247840 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.454417 5012 generic.go:334] "Generic (PLEG): container finished" podID="70ce9757-cdf1-4864-95ad-9d25fb9830a9" containerID="021fb32c8f118be6cb115c199b5bccac76ab7b25e96dc7239f4fa322280c2c3c" exitCode=0
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.454535 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nr45z" event={"ID":"70ce9757-cdf1-4864-95ad-9d25fb9830a9","Type":"ContainerDied","Data":"021fb32c8f118be6cb115c199b5bccac76ab7b25e96dc7239f4fa322280c2c3c"}
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.456079 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44","Type":"ContainerStarted","Data":"17c75905f5e68679c3ca398b8340bbca140cca23afe8f2f0be29edbc0c8b934f"}
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.456129 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cc79bf66-4a34-43fe-ad03-4e6ce60d2c44","Type":"ContainerStarted","Data":"47a48bb3171a3d55ebe0ae1b610ada66d436c3aa3c7e61ea1720b03a90b1d619"}
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.456705 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.465803 5012 generic.go:334] "Generic (PLEG): container finished" podID="01803024-8b09-46a8-849a-7129e5734fc5" containerID="f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5" exitCode=0
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.465834 5012 generic.go:334] "Generic (PLEG): container finished" podID="01803024-8b09-46a8-849a-7129e5734fc5" containerID="e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b" exitCode=2
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.465841 5012 generic.go:334] "Generic (PLEG): container finished" podID="01803024-8b09-46a8-849a-7129e5734fc5" containerID="9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5" exitCode=0
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.465878 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerDied","Data":"f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5"}
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.465897 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerDied","Data":"e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b"}
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.465907 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerDied","Data":"9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5"}
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.468594 5012 generic.go:334] "Generic (PLEG): container finished" podID="9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" containerID="0665dea2b78f255d6fbccb798f4cfaab479a2e00f62ee271920f433e530bc5cb" exitCode=0
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.469380 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" event={"ID":"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e","Type":"ContainerDied","Data":"0665dea2b78f255d6fbccb798f4cfaab479a2e00f62ee271920f433e530bc5cb"}
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.514829 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.538358 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.082493658 podStartE2EDuration="2.538334632s" podCreationTimestamp="2026-02-19 05:45:39 +0000 UTC" firstStartedPulling="2026-02-19 05:45:40.48105089 +0000 UTC m=+1236.514373459" lastFinishedPulling="2026-02-19 05:45:40.936891874 +0000 UTC m=+1236.970214433" observedRunningTime="2026-02-19 05:45:41.5020943 +0000 UTC m=+1237.535416869" watchObservedRunningTime="2026-02-19 05:45:41.538334632 +0000 UTC m=+1237.571657201"
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.784700 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl"
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.864482 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-nb\") pod \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") "
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.864597 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-svc\") pod \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") "
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.864829 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-swift-storage-0\") pod \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") "
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.864951 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-config\") pod \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") "
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.865042 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-sb\") pod \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") "
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.865199 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztbnr\" (UniqueName: \"kubernetes.io/projected/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-kube-api-access-ztbnr\") pod \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\" (UID: \"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e\") "
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.875619 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-kube-api-access-ztbnr" (OuterVolumeSpecName: "kube-api-access-ztbnr") pod "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" (UID: "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e"). InnerVolumeSpecName "kube-api-access-ztbnr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.921048 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" (UID: "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.925175 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" (UID: "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.926185 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-config" (OuterVolumeSpecName: "config") pod "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" (UID: "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.926847 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" (UID: "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.931511 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" (UID: "9de87102-5cbd-4d8c-ae87-32fdcb58cf3e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.936476 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.936594 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.968390 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.968493 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.968548 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.968597 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-config\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.968644 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:41 crc kubenswrapper[5012]: I0219 05:45:41.968690 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztbnr\" (UniqueName: \"kubernetes.io/projected/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e-kube-api-access-ztbnr\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.483016 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl"
Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.483045 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl" event={"ID":"9de87102-5cbd-4d8c-ae87-32fdcb58cf3e","Type":"ContainerDied","Data":"5892f4877b405b9244dd43361effc1a470655536dbd633845dd04bd643dbfba5"}
Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.483844 5012 scope.go:117] "RemoveContainer" containerID="0665dea2b78f255d6fbccb798f4cfaab479a2e00f62ee271920f433e530bc5cb"
Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.541813 5012 scope.go:117] "RemoveContainer" containerID="ca6a3289326a3d74df11835a9c2f296bc10d31bbffc5d5c69c448a3f93f521ea"
Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.555034 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl"]
Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.572103 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c7cb6dcdc-qjtxl"]
Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.753628 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" path="/var/lib/kubelet/pods/9de87102-5cbd-4d8c-ae87-32fdcb58cf3e/volumes"
Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.917129 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nr45z"
Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.997175 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-config-data\") pod \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") "
Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.997337 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7wrx\" (UniqueName: \"kubernetes.io/projected/70ce9757-cdf1-4864-95ad-9d25fb9830a9-kube-api-access-m7wrx\") pod \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") "
Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.997676 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-combined-ca-bundle\") pod \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") "
Feb 19 05:45:42 crc kubenswrapper[5012]: I0219 05:45:42.998161 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-scripts\") pod \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\" (UID: \"70ce9757-cdf1-4864-95ad-9d25fb9830a9\") "
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.003350 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-scripts" (OuterVolumeSpecName: "scripts") pod "70ce9757-cdf1-4864-95ad-9d25fb9830a9" (UID: "70ce9757-cdf1-4864-95ad-9d25fb9830a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.018180 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70ce9757-cdf1-4864-95ad-9d25fb9830a9-kube-api-access-m7wrx" (OuterVolumeSpecName: "kube-api-access-m7wrx") pod "70ce9757-cdf1-4864-95ad-9d25fb9830a9" (UID: "70ce9757-cdf1-4864-95ad-9d25fb9830a9"). InnerVolumeSpecName "kube-api-access-m7wrx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.032588 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-config-data" (OuterVolumeSpecName: "config-data") pod "70ce9757-cdf1-4864-95ad-9d25fb9830a9" (UID: "70ce9757-cdf1-4864-95ad-9d25fb9830a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.040129 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70ce9757-cdf1-4864-95ad-9d25fb9830a9" (UID: "70ce9757-cdf1-4864-95ad-9d25fb9830a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.096333 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.096392 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.101501 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.101552 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.101563 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70ce9757-cdf1-4864-95ad-9d25fb9830a9-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.101575 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7wrx\" (UniqueName: \"kubernetes.io/projected/70ce9757-cdf1-4864-95ad-9d25fb9830a9-kube-api-access-m7wrx\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.510857 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nr45z" event={"ID":"70ce9757-cdf1-4864-95ad-9d25fb9830a9","Type":"ContainerDied","Data":"7bfe1197486b22ee1f88d8b65bc65fff0d81ce7b626661bba38e8562901e7bb1"}
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.511265 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bfe1197486b22ee1f88d8b65bc65fff0d81ce7b626661bba38e8562901e7bb1"
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.511390 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nr45z"
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.689341 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.689648 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-log" containerID="cri-o://bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b" gracePeriod=30
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.689741 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-api" containerID="cri-o://92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42" gracePeriod=30
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.776358 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.776650 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerName="nova-metadata-log" containerID="cri-o://6d55bd48d5dc9a998906eb29d5ab2a1f6e9c30189d3ebafeea49dae332345272" gracePeriod=30
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.777239 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerName="nova-metadata-metadata" containerID="cri-o://a858fcdf83bd73b366127d94812fddc1ba76c33cc1bd9175a15eaf1cb800af07" gracePeriod=30
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.789168 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 19 05:45:43 crc kubenswrapper[5012]: I0219 05:45:43.789738 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fb843c15-c78d-4b5e-91b3-31ec0befd9fe" containerName="nova-scheduler-scheduler" containerID="cri-o://afdc318ce7e7f31c55b83d198c0056a9143debe76f4068e0b8b55a3cd789f800" gracePeriod=30
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.431728 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.431794 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.538549 5012 generic.go:334] "Generic (PLEG): container finished" podID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerID="a858fcdf83bd73b366127d94812fddc1ba76c33cc1bd9175a15eaf1cb800af07" exitCode=0
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.538582 5012 generic.go:334] "Generic (PLEG): container finished" podID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerID="6d55bd48d5dc9a998906eb29d5ab2a1f6e9c30189d3ebafeea49dae332345272" exitCode=143
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.538637 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f259e859-c226-472e-85d3-8b5a9c7ba66a","Type":"ContainerDied","Data":"a858fcdf83bd73b366127d94812fddc1ba76c33cc1bd9175a15eaf1cb800af07"}
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.538666 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f259e859-c226-472e-85d3-8b5a9c7ba66a","Type":"ContainerDied","Data":"6d55bd48d5dc9a998906eb29d5ab2a1f6e9c30189d3ebafeea49dae332345272"}
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.544294 5012 generic.go:334] "Generic (PLEG): container finished" podID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerID="bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b" exitCode=143
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.544353 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9c5be03-d36f-4a6a-8359-535ed4ad505d","Type":"ContainerDied","Data":"bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b"}
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.763575 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.837880 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-config-data\") pod \"f259e859-c226-472e-85d3-8b5a9c7ba66a\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") "
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.837934 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f259e859-c226-472e-85d3-8b5a9c7ba66a-logs\") pod \"f259e859-c226-472e-85d3-8b5a9c7ba66a\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") "
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.838017 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-combined-ca-bundle\") pod \"f259e859-c226-472e-85d3-8b5a9c7ba66a\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") "
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.838141 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lzqd\" (UniqueName: \"kubernetes.io/projected/f259e859-c226-472e-85d3-8b5a9c7ba66a-kube-api-access-2lzqd\") pod \"f259e859-c226-472e-85d3-8b5a9c7ba66a\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") "
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.838177 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-nova-metadata-tls-certs\") pod \"f259e859-c226-472e-85d3-8b5a9c7ba66a\" (UID: \"f259e859-c226-472e-85d3-8b5a9c7ba66a\") "
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.839693 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f259e859-c226-472e-85d3-8b5a9c7ba66a-logs" (OuterVolumeSpecName: "logs") pod "f259e859-c226-472e-85d3-8b5a9c7ba66a" (UID: "f259e859-c226-472e-85d3-8b5a9c7ba66a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.846515 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f259e859-c226-472e-85d3-8b5a9c7ba66a-kube-api-access-2lzqd" (OuterVolumeSpecName: "kube-api-access-2lzqd") pod "f259e859-c226-472e-85d3-8b5a9c7ba66a" (UID: "f259e859-c226-472e-85d3-8b5a9c7ba66a"). InnerVolumeSpecName "kube-api-access-2lzqd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.879519 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f259e859-c226-472e-85d3-8b5a9c7ba66a" (UID: "f259e859-c226-472e-85d3-8b5a9c7ba66a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.891511 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-config-data" (OuterVolumeSpecName: "config-data") pod "f259e859-c226-472e-85d3-8b5a9c7ba66a" (UID: "f259e859-c226-472e-85d3-8b5a9c7ba66a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.902847 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f259e859-c226-472e-85d3-8b5a9c7ba66a" (UID: "f259e859-c226-472e-85d3-8b5a9c7ba66a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.940080 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.940116 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f259e859-c226-472e-85d3-8b5a9c7ba66a-logs\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.940125 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.940136 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lzqd\" (UniqueName: \"kubernetes.io/projected/f259e859-c226-472e-85d3-8b5a9c7ba66a-kube-api-access-2lzqd\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:44 crc kubenswrapper[5012]: I0219 05:45:44.940146 5012 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f259e859-c226-472e-85d3-8b5a9c7ba66a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.563348 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f259e859-c226-472e-85d3-8b5a9c7ba66a","Type":"ContainerDied","Data":"1f7a7e5be52d1531162f80520ab7a9bc9939afb0e0d82ef9812331d77771bcd1"}
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.563869 5012 scope.go:117] "RemoveContainer" containerID="a858fcdf83bd73b366127d94812fddc1ba76c33cc1bd9175a15eaf1cb800af07"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.563510 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.569125 5012 generic.go:334] "Generic (PLEG): container finished" podID="7fc8fbb1-0e37-419f-86e0-6ce8db99225d" containerID="0b98797f9f7e97071d4699ed1c59c23ddac69aff6fb8708f48bdc42a56a8cf34" exitCode=0
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.569155 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nbn8z" event={"ID":"7fc8fbb1-0e37-419f-86e0-6ce8db99225d","Type":"ContainerDied","Data":"0b98797f9f7e97071d4699ed1c59c23ddac69aff6fb8708f48bdc42a56a8cf34"}
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.601043 5012 scope.go:117] "RemoveContainer" containerID="6d55bd48d5dc9a998906eb29d5ab2a1f6e9c30189d3ebafeea49dae332345272"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.623941 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.642683 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652090 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 05:45:45 crc kubenswrapper[5012]: E0219 05:45:45.652523 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerName="nova-metadata-log"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652544 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerName="nova-metadata-log"
Feb 19 05:45:45 crc kubenswrapper[5012]: E0219 05:45:45.652567 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ce9757-cdf1-4864-95ad-9d25fb9830a9" containerName="nova-manage"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652576 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ce9757-cdf1-4864-95ad-9d25fb9830a9" containerName="nova-manage"
Feb 19 05:45:45 crc kubenswrapper[5012]: E0219 05:45:45.652594 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" containerName="dnsmasq-dns"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652600 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" containerName="dnsmasq-dns"
Feb 19 05:45:45 crc kubenswrapper[5012]: E0219 05:45:45.652617 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerName="nova-metadata-metadata"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652623 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerName="nova-metadata-metadata"
Feb 19 05:45:45 crc kubenswrapper[5012]: E0219 05:45:45.652646 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" containerName="init"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652652 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" containerName="init"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652845 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="70ce9757-cdf1-4864-95ad-9d25fb9830a9" containerName="nova-manage"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652860 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerName="nova-metadata-metadata"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652870 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="9de87102-5cbd-4d8c-ae87-32fdcb58cf3e" containerName="dnsmasq-dns"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.652886 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f259e859-c226-472e-85d3-8b5a9c7ba66a" containerName="nova-metadata-log"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.653908 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.656952 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.657192 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.660504 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.866191 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-logs\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.867064 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-config-data\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.867450 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0"
Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.867566 5012 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9rh7\" (UniqueName: \"kubernetes.io/projected/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-kube-api-access-q9rh7\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.867637 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.968903 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-logs\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.969011 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-config-data\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.969042 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.969087 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9rh7\" (UniqueName: 
\"kubernetes.io/projected/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-kube-api-access-q9rh7\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.969112 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.969413 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-logs\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.975281 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-config-data\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.976043 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc kubenswrapper[5012]: I0219 05:45:45.976987 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:45 crc 
kubenswrapper[5012]: I0219 05:45:45.989685 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9rh7\" (UniqueName: \"kubernetes.io/projected/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-kube-api-access-q9rh7\") pod \"nova-metadata-0\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " pod="openstack/nova-metadata-0" Feb 19 05:45:46 crc kubenswrapper[5012]: E0219 05:45:46.213844 5012 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="afdc318ce7e7f31c55b83d198c0056a9143debe76f4068e0b8b55a3cd789f800" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 05:45:46 crc kubenswrapper[5012]: E0219 05:45:46.218880 5012 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="afdc318ce7e7f31c55b83d198c0056a9143debe76f4068e0b8b55a3cd789f800" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 05:45:46 crc kubenswrapper[5012]: E0219 05:45:46.221784 5012 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="afdc318ce7e7f31c55b83d198c0056a9143debe76f4068e0b8b55a3cd789f800" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 05:45:46 crc kubenswrapper[5012]: E0219 05:45:46.221862 5012 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="fb843c15-c78d-4b5e-91b3-31ec0befd9fe" containerName="nova-scheduler-scheduler" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.279864 5012 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.392838 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.591677 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-config-data\") pod \"01803024-8b09-46a8-849a-7129e5734fc5\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.593510 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-run-httpd\") pod \"01803024-8b09-46a8-849a-7129e5734fc5\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.593615 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-scripts\") pod \"01803024-8b09-46a8-849a-7129e5734fc5\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.594894 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "01803024-8b09-46a8-849a-7129e5734fc5" (UID: "01803024-8b09-46a8-849a-7129e5734fc5"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.597061 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2zr6\" (UniqueName: \"kubernetes.io/projected/01803024-8b09-46a8-849a-7129e5734fc5-kube-api-access-v2zr6\") pod \"01803024-8b09-46a8-849a-7129e5734fc5\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.597712 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-combined-ca-bundle\") pod \"01803024-8b09-46a8-849a-7129e5734fc5\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.597760 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-sg-core-conf-yaml\") pod \"01803024-8b09-46a8-849a-7129e5734fc5\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.597982 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-log-httpd\") pod \"01803024-8b09-46a8-849a-7129e5734fc5\" (UID: \"01803024-8b09-46a8-849a-7129e5734fc5\") " Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.598867 5012 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.602746 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"01803024-8b09-46a8-849a-7129e5734fc5" (UID: "01803024-8b09-46a8-849a-7129e5734fc5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.603426 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01803024-8b09-46a8-849a-7129e5734fc5-kube-api-access-v2zr6" (OuterVolumeSpecName: "kube-api-access-v2zr6") pod "01803024-8b09-46a8-849a-7129e5734fc5" (UID: "01803024-8b09-46a8-849a-7129e5734fc5"). InnerVolumeSpecName "kube-api-access-v2zr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.603529 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-scripts" (OuterVolumeSpecName: "scripts") pod "01803024-8b09-46a8-849a-7129e5734fc5" (UID: "01803024-8b09-46a8-849a-7129e5734fc5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.604675 5012 generic.go:334] "Generic (PLEG): container finished" podID="01803024-8b09-46a8-849a-7129e5734fc5" containerID="2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832" exitCode=0 Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.604852 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerDied","Data":"2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832"} Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.604907 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01803024-8b09-46a8-849a-7129e5734fc5","Type":"ContainerDied","Data":"1a4c3e21ec02a97624b92d231eadc367c369bbe32cd7bae830f477cfab60fbad"} Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.604943 5012 scope.go:117] 
"RemoveContainer" containerID="f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.605167 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.640746 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "01803024-8b09-46a8-849a-7129e5734fc5" (UID: "01803024-8b09-46a8-849a-7129e5734fc5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.676121 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01803024-8b09-46a8-849a-7129e5734fc5" (UID: "01803024-8b09-46a8-849a-7129e5734fc5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.700707 5012 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01803024-8b09-46a8-849a-7129e5734fc5-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.700811 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.700876 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2zr6\" (UniqueName: \"kubernetes.io/projected/01803024-8b09-46a8-849a-7129e5734fc5-kube-api-access-v2zr6\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.700949 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.701012 5012 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.714346 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f259e859-c226-472e-85d3-8b5a9c7ba66a" path="/var/lib/kubelet/pods/f259e859-c226-472e-85d3-8b5a9c7ba66a/volumes" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.732562 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-config-data" (OuterVolumeSpecName: "config-data") pod "01803024-8b09-46a8-849a-7129e5734fc5" (UID: "01803024-8b09-46a8-849a-7129e5734fc5"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.804109 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01803024-8b09-46a8-849a-7129e5734fc5-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.807719 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.831631 5012 scope.go:117] "RemoveContainer" containerID="e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b" Feb 19 05:45:46 crc kubenswrapper[5012]: W0219 05:45:46.841169 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dbca55d_fe7e_4a74_a25c_8c495eb29e3b.slice/crio-d5151fe8a2179cf3ec35bc35e025b6f051659ce0400cbb15c53f153c34909628 WatchSource:0}: Error finding container d5151fe8a2179cf3ec35bc35e025b6f051659ce0400cbb15c53f153c34909628: Status 404 returned error can't find the container with id d5151fe8a2179cf3ec35bc35e025b6f051659ce0400cbb15c53f153c34909628 Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.883408 5012 scope.go:117] "RemoveContainer" containerID="2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.940720 5012 scope.go:117] "RemoveContainer" containerID="9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.958839 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nbn8z" Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.972363 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.989573 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:45:46 crc kubenswrapper[5012]: I0219 05:45:46.999084 5012 scope.go:117] "RemoveContainer" containerID="f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5" Feb 19 05:45:46 crc kubenswrapper[5012]: E0219 05:45:46.999900 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5\": container with ID starting with f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5 not found: ID does not exist" containerID="f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.000002 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5"} err="failed to get container status \"f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5\": rpc error: code = NotFound desc = could not find container \"f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5\": container with ID starting with f1dbd1b26aaf0144929740cb467be1e57629658970af650eda2a06fce9113ca5 not found: ID does not exist" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.000099 5012 scope.go:117] "RemoveContainer" containerID="e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b" Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.000620 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b\": container with ID starting with e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b not found: ID does not exist" containerID="e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.000664 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b"} err="failed to get container status \"e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b\": rpc error: code = NotFound desc = could not find container \"e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b\": container with ID starting with e3b820c3eca99a6932fe0150b7f70db46f68002f3a3019e2d376a5f2522f346b not found: ID does not exist" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.000694 5012 scope.go:117] "RemoveContainer" containerID="2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832" Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.001002 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832\": container with ID starting with 2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832 not found: ID does not exist" containerID="2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.001044 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832"} err="failed to get container status \"2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832\": rpc error: code = NotFound desc = could not find container \"2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832\": container with ID 
starting with 2b741cf6a82f25a97e5298fa571f40d9a7a0cd9740ab89bfcec1dc69ddc2b832 not found: ID does not exist" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.001074 5012 scope.go:117] "RemoveContainer" containerID="9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5" Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.002984 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5\": container with ID starting with 9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5 not found: ID does not exist" containerID="9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.003009 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5"} err="failed to get container status \"9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5\": rpc error: code = NotFound desc = could not find container \"9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5\": container with ID starting with 9002acae13699d07b65e2198b1b2bfd440af0660f001677a1517b9a62ff63db5 not found: ID does not exist" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.006559 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-config-data\") pod \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.006595 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-combined-ca-bundle\") pod \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\" 
(UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.006660 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szzzk\" (UniqueName: \"kubernetes.io/projected/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-kube-api-access-szzzk\") pod \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.006687 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-scripts\") pod \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\" (UID: \"7fc8fbb1-0e37-419f-86e0-6ce8db99225d\") " Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.008850 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.009319 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="ceilometer-notification-agent" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.009337 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="ceilometer-notification-agent" Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.009352 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc8fbb1-0e37-419f-86e0-6ce8db99225d" containerName="nova-cell1-conductor-db-sync" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.009360 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc8fbb1-0e37-419f-86e0-6ce8db99225d" containerName="nova-cell1-conductor-db-sync" Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.009386 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="sg-core" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 
05:45:47.009394 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="sg-core" Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.009410 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="ceilometer-central-agent" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.009416 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="ceilometer-central-agent" Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.009430 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="proxy-httpd" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.009436 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="proxy-httpd" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.009613 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc8fbb1-0e37-419f-86e0-6ce8db99225d" containerName="nova-cell1-conductor-db-sync" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.009628 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="sg-core" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.009643 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="ceilometer-central-agent" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.009654 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="proxy-httpd" Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.009661 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="01803024-8b09-46a8-849a-7129e5734fc5" containerName="ceilometer-notification-agent" Feb 19 05:45:47 crc 
kubenswrapper[5012]: I0219 05:45:47.011588 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.017118 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.017967 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.018652 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.020550 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.027773 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-kube-api-access-szzzk" (OuterVolumeSpecName: "kube-api-access-szzzk") pod "7fc8fbb1-0e37-419f-86e0-6ce8db99225d" (UID: "7fc8fbb1-0e37-419f-86e0-6ce8db99225d"). InnerVolumeSpecName "kube-api-access-szzzk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.034249 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-scripts" (OuterVolumeSpecName: "scripts") pod "7fc8fbb1-0e37-419f-86e0-6ce8db99225d" (UID: "7fc8fbb1-0e37-419f-86e0-6ce8db99225d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.054463 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-config-data" (OuterVolumeSpecName: "config-data") pod "7fc8fbb1-0e37-419f-86e0-6ce8db99225d" (UID: "7fc8fbb1-0e37-419f-86e0-6ce8db99225d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.071897 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fc8fbb1-0e37-419f-86e0-6ce8db99225d" (UID: "7fc8fbb1-0e37-419f-86e0-6ce8db99225d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.109416 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-log-httpd\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.109502 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-run-httpd\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.109536 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.109630 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.109755 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.109816 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpktb\" (UniqueName: \"kubernetes.io/projected/27805340-8269-4d8f-9183-b1cb339fea39-kube-api-access-vpktb\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.109898 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-config-data\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.109916 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-scripts\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.110177 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.110201 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.110229 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szzzk\" (UniqueName: \"kubernetes.io/projected/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-kube-api-access-szzzk\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.110238 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fc8fbb1-0e37-419f-86e0-6ce8db99225d-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.158697 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.211400 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-config-data\") pod \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") "
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.211826 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzdm8\" (UniqueName: \"kubernetes.io/projected/b9c5be03-d36f-4a6a-8359-535ed4ad505d-kube-api-access-rzdm8\") pod \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") "
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.211932 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c5be03-d36f-4a6a-8359-535ed4ad505d-logs\") pod \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") "
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.212247 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-combined-ca-bundle\") pod \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\" (UID: \"b9c5be03-d36f-4a6a-8359-535ed4ad505d\") "
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.212980 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9c5be03-d36f-4a6a-8359-535ed4ad505d-logs" (OuterVolumeSpecName: "logs") pod "b9c5be03-d36f-4a6a-8359-535ed4ad505d" (UID: "b9c5be03-d36f-4a6a-8359-535ed4ad505d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.213780 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.213876 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpktb\" (UniqueName: \"kubernetes.io/projected/27805340-8269-4d8f-9183-b1cb339fea39-kube-api-access-vpktb\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.213906 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-scripts\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.213944 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-config-data\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.214029 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-log-httpd\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.214065 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-run-httpd\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.214109 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.214173 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.214265 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9c5be03-d36f-4a6a-8359-535ed4ad505d-logs\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.215700 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-log-httpd\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.215783 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c5be03-d36f-4a6a-8359-535ed4ad505d-kube-api-access-rzdm8" (OuterVolumeSpecName: "kube-api-access-rzdm8") pod "b9c5be03-d36f-4a6a-8359-535ed4ad505d" (UID: "b9c5be03-d36f-4a6a-8359-535ed4ad505d"). InnerVolumeSpecName "kube-api-access-rzdm8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.216442 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-run-httpd\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.218130 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.218842 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-scripts\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.219447 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.247477 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-config-data\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.247981 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.250518 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpktb\" (UniqueName: \"kubernetes.io/projected/27805340-8269-4d8f-9183-b1cb339fea39-kube-api-access-vpktb\") pod \"ceilometer-0\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.275640 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9c5be03-d36f-4a6a-8359-535ed4ad505d" (UID: "b9c5be03-d36f-4a6a-8359-535ed4ad505d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.283813 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-config-data" (OuterVolumeSpecName: "config-data") pod "b9c5be03-d36f-4a6a-8359-535ed4ad505d" (UID: "b9c5be03-d36f-4a6a-8359-535ed4ad505d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.315111 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzdm8\" (UniqueName: \"kubernetes.io/projected/b9c5be03-d36f-4a6a-8359-535ed4ad505d-kube-api-access-rzdm8\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.315143 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.315152 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9c5be03-d36f-4a6a-8359-535ed4ad505d-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.334037 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.658992 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nbn8z" event={"ID":"7fc8fbb1-0e37-419f-86e0-6ce8db99225d","Type":"ContainerDied","Data":"d98f7ee44c86e6ff53f7f25188347cf80c40d13a66e4ecf4956ac175a094de8b"}
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.659059 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d98f7ee44c86e6ff53f7f25188347cf80c40d13a66e4ecf4956ac175a094de8b"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.659073 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nbn8z"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.663665 5012 generic.go:334] "Generic (PLEG): container finished" podID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerID="92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42" exitCode=0
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.663729 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.663758 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9c5be03-d36f-4a6a-8359-535ed4ad505d","Type":"ContainerDied","Data":"92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42"}
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.663804 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b9c5be03-d36f-4a6a-8359-535ed4ad505d","Type":"ContainerDied","Data":"ecb97b91d6f4ced51237b91313e01c9556310ff6ba6362c7e6f2808ce0a033d1"}
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.663827 5012 scope.go:117] "RemoveContainer" containerID="92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.692615 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.694022 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-log"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.694048 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-log"
Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.694091 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-api"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.694099 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-api"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.694724 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-api"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.694794 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" containerName="nova-api-log"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.696248 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.705531 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b","Type":"ContainerStarted","Data":"ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3"}
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.705707 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b","Type":"ContainerStarted","Data":"6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b"}
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.705835 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b","Type":"ContainerStarted","Data":"d5151fe8a2179cf3ec35bc35e025b6f051659ce0400cbb15c53f153c34909628"}
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.711439 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.713390 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.716405 5012 scope.go:117] "RemoveContainer" containerID="bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.761331 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.761311305 podStartE2EDuration="2.761311305s" podCreationTimestamp="2026-02-19 05:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:47.746737041 +0000 UTC m=+1243.780059610" watchObservedRunningTime="2026-02-19 05:45:47.761311305 +0000 UTC m=+1243.794633874"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.779169 5012 scope.go:117] "RemoveContainer" containerID="92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42"
Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.783778 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42\": container with ID starting with 92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42 not found: ID does not exist" containerID="92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.783828 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42"} err="failed to get container status \"92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42\": rpc error: code = NotFound desc = could not find container \"92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42\": container with ID starting with 92c648c32255964cda76f91854b828f66adc8a354325b8f0379f61914a42fe42 not found: ID does not exist"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.783852 5012 scope.go:117] "RemoveContainer" containerID="bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b"
Feb 19 05:45:47 crc kubenswrapper[5012]: E0219 05:45:47.784280 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b\": container with ID starting with bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b not found: ID does not exist" containerID="bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.784340 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b"} err="failed to get container status \"bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b\": rpc error: code = NotFound desc = could not find container \"bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b\": container with ID starting with bea5910543a7e83a30e67e920aa6feb6633448b0b9eb2f222ebb6dd8d320eb6b not found: ID does not exist"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.796611 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.827470 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.838554 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aceef718-9d1c-441d-bf1b-92c0a6831def-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"aceef718-9d1c-441d-bf1b-92c0a6831def\") " pod="openstack/nova-cell1-conductor-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.838778 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aceef718-9d1c-441d-bf1b-92c0a6831def-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"aceef718-9d1c-441d-bf1b-92c0a6831def\") " pod="openstack/nova-cell1-conductor-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.838820 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkj22\" (UniqueName: \"kubernetes.io/projected/aceef718-9d1c-441d-bf1b-92c0a6831def-kube-api-access-gkj22\") pod \"nova-cell1-conductor-0\" (UID: \"aceef718-9d1c-441d-bf1b-92c0a6831def\") " pod="openstack/nova-cell1-conductor-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.843797 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.846247 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.850742 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.860420 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.877347 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.941848 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aceef718-9d1c-441d-bf1b-92c0a6831def-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"aceef718-9d1c-441d-bf1b-92c0a6831def\") " pod="openstack/nova-cell1-conductor-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.942343 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aceef718-9d1c-441d-bf1b-92c0a6831def-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"aceef718-9d1c-441d-bf1b-92c0a6831def\") " pod="openstack/nova-cell1-conductor-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.942539 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkj22\" (UniqueName: \"kubernetes.io/projected/aceef718-9d1c-441d-bf1b-92c0a6831def-kube-api-access-gkj22\") pod \"nova-cell1-conductor-0\" (UID: \"aceef718-9d1c-441d-bf1b-92c0a6831def\") " pod="openstack/nova-cell1-conductor-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.949422 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aceef718-9d1c-441d-bf1b-92c0a6831def-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"aceef718-9d1c-441d-bf1b-92c0a6831def\") " pod="openstack/nova-cell1-conductor-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.951961 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aceef718-9d1c-441d-bf1b-92c0a6831def-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"aceef718-9d1c-441d-bf1b-92c0a6831def\") " pod="openstack/nova-cell1-conductor-0"
Feb 19 05:45:47 crc kubenswrapper[5012]: I0219 05:45:47.964014 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkj22\" (UniqueName: \"kubernetes.io/projected/aceef718-9d1c-441d-bf1b-92c0a6831def-kube-api-access-gkj22\") pod \"nova-cell1-conductor-0\" (UID: \"aceef718-9d1c-441d-bf1b-92c0a6831def\") " pod="openstack/nova-cell1-conductor-0"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.041964 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.044600 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-config-data\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.044684 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-logs\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.044729 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9cwp\" (UniqueName: \"kubernetes.io/projected/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-kube-api-access-z9cwp\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.044766 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.152985 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-config-data\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.153122 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-logs\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.153206 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9cwp\" (UniqueName: \"kubernetes.io/projected/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-kube-api-access-z9cwp\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.153287 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.154184 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-logs\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.162409 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-config-data\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.163661 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.190822 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9cwp\" (UniqueName: \"kubernetes.io/projected/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-kube-api-access-z9cwp\") pod \"nova-api-0\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " pod="openstack/nova-api-0"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.476034 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.604630 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.718436 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01803024-8b09-46a8-849a-7129e5734fc5" path="/var/lib/kubelet/pods/01803024-8b09-46a8-849a-7129e5734fc5/volumes"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.719571 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c5be03-d36f-4a6a-8359-535ed4ad505d" path="/var/lib/kubelet/pods/b9c5be03-d36f-4a6a-8359-535ed4ad505d/volumes"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.724293 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerStarted","Data":"b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817"}
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.724386 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerStarted","Data":"f47fd647e08594f727d5b6f46aed06b94634943796a315baee684b47c07fa5fe"}
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.725544 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"aceef718-9d1c-441d-bf1b-92c0a6831def","Type":"ContainerStarted","Data":"16414366c7851ccc17ca0f98a4981455028596ffb91ab593396dd0823211ab8f"}
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.727855 5012 generic.go:334] "Generic (PLEG): container finished" podID="fb843c15-c78d-4b5e-91b3-31ec0befd9fe" containerID="afdc318ce7e7f31c55b83d198c0056a9143debe76f4068e0b8b55a3cd789f800" exitCode=0
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.727963 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fb843c15-c78d-4b5e-91b3-31ec0befd9fe","Type":"ContainerDied","Data":"afdc318ce7e7f31c55b83d198c0056a9143debe76f4068e0b8b55a3cd789f800"}
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.727983 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fb843c15-c78d-4b5e-91b3-31ec0befd9fe","Type":"ContainerDied","Data":"ee7567e98958d50555ddbca81a211daa490b9d51437c109eb4b01f873305fc7c"}
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.728099 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee7567e98958d50555ddbca81a211daa490b9d51437c109eb4b01f873305fc7c"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.765340 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.868127 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-combined-ca-bundle\") pod \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") "
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.868346 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj8q6\" (UniqueName: \"kubernetes.io/projected/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-kube-api-access-fj8q6\") pod \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") "
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.868464 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-config-data\") pod \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\" (UID: \"fb843c15-c78d-4b5e-91b3-31ec0befd9fe\") "
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.878989 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-kube-api-access-fj8q6" (OuterVolumeSpecName: "kube-api-access-fj8q6") pod "fb843c15-c78d-4b5e-91b3-31ec0befd9fe" (UID: "fb843c15-c78d-4b5e-91b3-31ec0befd9fe"). InnerVolumeSpecName "kube-api-access-fj8q6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.898942 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-config-data" (OuterVolumeSpecName: "config-data") pod "fb843c15-c78d-4b5e-91b3-31ec0befd9fe" (UID: "fb843c15-c78d-4b5e-91b3-31ec0befd9fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.899632 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb843c15-c78d-4b5e-91b3-31ec0befd9fe" (UID: "fb843c15-c78d-4b5e-91b3-31ec0befd9fe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.967034 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.970433 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.970464 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj8q6\" (UniqueName: \"kubernetes.io/projected/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-kube-api-access-fj8q6\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:48 crc kubenswrapper[5012]: I0219 05:45:48.970476 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb843c15-c78d-4b5e-91b3-31ec0befd9fe-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.744781 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerStarted","Data":"84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846"} Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.745081 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerStarted","Data":"7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f"} Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.748864 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"aceef718-9d1c-441d-bf1b-92c0a6831def","Type":"ContainerStarted","Data":"df6288f11c34ee3d8f152b2cd2ea6131e77ec5aaa909cdee0c8b10ce416c10dc"} Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.749520 5012 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.752387 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06","Type":"ContainerStarted","Data":"7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a"} Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.752454 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06","Type":"ContainerStarted","Data":"e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721"} Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.752475 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06","Type":"ContainerStarted","Data":"cf097bcda3c824344ff950b961502394eb3df8a35294e9a44c6a1c8d3d85a714"} Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.752406 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.785362 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.785341605 podStartE2EDuration="2.785341605s" podCreationTimestamp="2026-02-19 05:45:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:49.776410218 +0000 UTC m=+1245.809732817" watchObservedRunningTime="2026-02-19 05:45:49.785341605 +0000 UTC m=+1245.818664184" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.810096 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.809293908 podStartE2EDuration="2.809293908s" podCreationTimestamp="2026-02-19 05:45:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:49.803359474 +0000 UTC m=+1245.836682043" watchObservedRunningTime="2026-02-19 05:45:49.809293908 +0000 UTC m=+1245.842616487" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.835446 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.849342 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.874285 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:45:49 crc kubenswrapper[5012]: E0219 05:45:49.874791 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb843c15-c78d-4b5e-91b3-31ec0befd9fe" containerName="nova-scheduler-scheduler" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.874807 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fb843c15-c78d-4b5e-91b3-31ec0befd9fe" containerName="nova-scheduler-scheduler" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.875006 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb843c15-c78d-4b5e-91b3-31ec0befd9fe" containerName="nova-scheduler-scheduler" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.875728 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.884688 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.933254 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 05:45:49 crc kubenswrapper[5012]: I0219 05:45:49.933326 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.004850 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj24n\" (UniqueName: \"kubernetes.io/projected/96352ff3-accb-4fd1-8fa4-eec10f340eaf-kube-api-access-gj24n\") pod \"nova-scheduler-0\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.004909 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.004948 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-config-data\") pod \"nova-scheduler-0\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.107564 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj24n\" (UniqueName: \"kubernetes.io/projected/96352ff3-accb-4fd1-8fa4-eec10f340eaf-kube-api-access-gj24n\") pod \"nova-scheduler-0\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.107655 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.107725 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-config-data\") pod \"nova-scheduler-0\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.115511 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-config-data\") pod \"nova-scheduler-0\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.119888 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " pod="openstack/nova-scheduler-0" 
Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.125153 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj24n\" (UniqueName: \"kubernetes.io/projected/96352ff3-accb-4fd1-8fa4-eec10f340eaf-kube-api-access-gj24n\") pod \"nova-scheduler-0\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " pod="openstack/nova-scheduler-0" Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.214891 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.661908 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:45:50 crc kubenswrapper[5012]: W0219 05:45:50.665773 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96352ff3_accb_4fd1_8fa4_eec10f340eaf.slice/crio-0efe32e1979dd31ba39916aaeb5dec48666e72ade534d761ecc6b79a3666cbef WatchSource:0}: Error finding container 0efe32e1979dd31ba39916aaeb5dec48666e72ade534d761ecc6b79a3666cbef: Status 404 returned error can't find the container with id 0efe32e1979dd31ba39916aaeb5dec48666e72ade534d761ecc6b79a3666cbef Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.714587 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb843c15-c78d-4b5e-91b3-31ec0befd9fe" path="/var/lib/kubelet/pods/fb843c15-c78d-4b5e-91b3-31ec0befd9fe/volumes" Feb 19 05:45:50 crc kubenswrapper[5012]: I0219 05:45:50.766000 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96352ff3-accb-4fd1-8fa4-eec10f340eaf","Type":"ContainerStarted","Data":"0efe32e1979dd31ba39916aaeb5dec48666e72ade534d761ecc6b79a3666cbef"} Feb 19 05:45:51 crc kubenswrapper[5012]: I0219 05:45:51.280581 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 05:45:51 crc kubenswrapper[5012]: I0219 
05:45:51.280995 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 05:45:51 crc kubenswrapper[5012]: I0219 05:45:51.781796 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerStarted","Data":"fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f"} Feb 19 05:45:51 crc kubenswrapper[5012]: I0219 05:45:51.783152 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 05:45:51 crc kubenswrapper[5012]: I0219 05:45:51.785174 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96352ff3-accb-4fd1-8fa4-eec10f340eaf","Type":"ContainerStarted","Data":"2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c"} Feb 19 05:45:51 crc kubenswrapper[5012]: I0219 05:45:51.807859 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.823430095 podStartE2EDuration="5.807818758s" podCreationTimestamp="2026-02-19 05:45:46 +0000 UTC" firstStartedPulling="2026-02-19 05:45:47.813806273 +0000 UTC m=+1243.847128842" lastFinishedPulling="2026-02-19 05:45:50.798194926 +0000 UTC m=+1246.831517505" observedRunningTime="2026-02-19 05:45:51.806842915 +0000 UTC m=+1247.840165514" watchObservedRunningTime="2026-02-19 05:45:51.807818758 +0000 UTC m=+1247.841141367" Feb 19 05:45:51 crc kubenswrapper[5012]: I0219 05:45:51.836127 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.8361057670000003 podStartE2EDuration="2.836105767s" podCreationTimestamp="2026-02-19 05:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:45:51.827579399 +0000 UTC m=+1247.860901988" 
watchObservedRunningTime="2026-02-19 05:45:51.836105767 +0000 UTC m=+1247.869428336" Feb 19 05:45:53 crc kubenswrapper[5012]: I0219 05:45:53.079284 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 19 05:45:55 crc kubenswrapper[5012]: I0219 05:45:55.215762 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 05:45:56 crc kubenswrapper[5012]: I0219 05:45:56.280535 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 05:45:56 crc kubenswrapper[5012]: I0219 05:45:56.280634 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 05:45:57 crc kubenswrapper[5012]: I0219 05:45:57.295545 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 05:45:57 crc kubenswrapper[5012]: I0219 05:45:57.295634 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 05:45:58 crc kubenswrapper[5012]: I0219 05:45:58.477064 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 05:45:58 crc kubenswrapper[5012]: I0219 05:45:58.477178 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 05:45:59 crc kubenswrapper[5012]: I0219 05:45:59.559501 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.217:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 05:45:59 crc kubenswrapper[5012]: I0219 05:45:59.560604 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.217:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 05:46:00 crc kubenswrapper[5012]: I0219 05:46:00.215570 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 05:46:00 crc kubenswrapper[5012]: I0219 05:46:00.270930 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 05:46:00 crc kubenswrapper[5012]: I0219 05:46:00.972176 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 05:46:05 crc kubenswrapper[5012]: I0219 05:46:05.926353 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.007299 5012 generic.go:334] "Generic (PLEG): container finished" podID="d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae" containerID="602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc" exitCode=137 Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.007425 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae","Type":"ContainerDied","Data":"602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc"} Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.007429 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.007480 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae","Type":"ContainerDied","Data":"f59fd4a4ac42380c62e5cbec861422215a50132042a18819d7c99682128821ac"} Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.007517 5012 scope.go:117] "RemoveContainer" containerID="602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.030067 5012 scope.go:117] "RemoveContainer" containerID="602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc" Feb 19 05:46:06 crc kubenswrapper[5012]: E0219 05:46:06.031423 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc\": container with ID starting with 602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc not found: ID does not exist" containerID="602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.031476 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc"} err="failed to get container status \"602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc\": rpc error: code = NotFound desc = could not find container \"602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc\": container with ID starting with 602de320c570328721b1a3f9ed4516f079691a06a5a4f66cb8b1ceb439f882cc not found: ID does not exist" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.090451 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-combined-ca-bundle\") pod \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.090613 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzcfh\" (UniqueName: \"kubernetes.io/projected/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-kube-api-access-kzcfh\") pod \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.090656 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-config-data\") pod \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\" (UID: \"d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae\") " Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.098619 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-kube-api-access-kzcfh" (OuterVolumeSpecName: "kube-api-access-kzcfh") pod "d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae" (UID: "d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae"). InnerVolumeSpecName "kube-api-access-kzcfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.128715 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae" (UID: "d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.154363 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-config-data" (OuterVolumeSpecName: "config-data") pod "d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae" (UID: "d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.194192 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzcfh\" (UniqueName: \"kubernetes.io/projected/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-kube-api-access-kzcfh\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.194225 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.194238 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.287192 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.287841 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.300676 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.380702 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 05:46:06 crc 
kubenswrapper[5012]: I0219 05:46:06.398426 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.413465 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 05:46:06 crc kubenswrapper[5012]: E0219 05:46:06.414191 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.414221 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.414609 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.415787 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.419085 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.419532 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.419759 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.424099 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.502191 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.502440 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.502503 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:06 crc kubenswrapper[5012]: 
I0219 05:46:06.502606 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrvq4\" (UniqueName: \"kubernetes.io/projected/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-kube-api-access-vrvq4\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.502692 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.605927 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.606080 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrvq4\" (UniqueName: \"kubernetes.io/projected/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-kube-api-access-vrvq4\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.606167 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:06 crc 
kubenswrapper[5012]: I0219 05:46:06.606378 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.606469 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.613761 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.613900 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.616270 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.617613 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.638708 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrvq4\" (UniqueName: \"kubernetes.io/projected/661e04e4-4ba2-4ea0-9ba6-3af2949e7e21-kube-api-access-vrvq4\") pod \"nova-cell1-novncproxy-0\" (UID: \"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.723917 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae" path="/var/lib/kubelet/pods/d5883665-a6ac-4d4b-ab72-c3ea7eaad6ae/volumes" Feb 19 05:46:06 crc kubenswrapper[5012]: I0219 05:46:06.739398 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:07 crc kubenswrapper[5012]: I0219 05:46:07.043772 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 05:46:07 crc kubenswrapper[5012]: I0219 05:46:07.058276 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.047332 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21","Type":"ContainerStarted","Data":"61755fb4f9526f0f09c3360af6043a794ad06789527403b495768dabab0f4b32"} Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.047931 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"661e04e4-4ba2-4ea0-9ba6-3af2949e7e21","Type":"ContainerStarted","Data":"b11a2b0cf303cbc48d43192ca58dcd19f09004d496e641cb741e046ba5838102"} Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.081497 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.081473263 podStartE2EDuration="2.081473263s" podCreationTimestamp="2026-02-19 05:46:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:46:08.069163753 +0000 UTC m=+1264.102486352" watchObservedRunningTime="2026-02-19 05:46:08.081473263 +0000 UTC m=+1264.114795842" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.487449 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.487860 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.488356 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.488437 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.497266 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.499799 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.740702 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85446bf977-vzlgl"] Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.743186 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.772031 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85446bf977-vzlgl"] Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.880883 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-config\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.880952 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb4s2\" (UniqueName: \"kubernetes.io/projected/0ee4ae6f-65e3-4467-8302-54381eeebd5a-kube-api-access-nb4s2\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.881100 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-nb\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.881134 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-swift-storage-0\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.881156 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-svc\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.881219 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-sb\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.983663 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-nb\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.983751 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-swift-storage-0\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.983807 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-svc\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.983914 5012 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-sb\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.984250 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-config\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.984333 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb4s2\" (UniqueName: \"kubernetes.io/projected/0ee4ae6f-65e3-4467-8302-54381eeebd5a-kube-api-access-nb4s2\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.985489 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-nb\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.985076 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-sb\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.985171 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-swift-storage-0\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.985361 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-config\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:08 crc kubenswrapper[5012]: I0219 05:46:08.985391 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-svc\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:09 crc kubenswrapper[5012]: I0219 05:46:09.012156 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb4s2\" (UniqueName: \"kubernetes.io/projected/0ee4ae6f-65e3-4467-8302-54381eeebd5a-kube-api-access-nb4s2\") pod \"dnsmasq-dns-85446bf977-vzlgl\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:09 crc kubenswrapper[5012]: I0219 05:46:09.080285 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:09 crc kubenswrapper[5012]: I0219 05:46:09.661753 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85446bf977-vzlgl"] Feb 19 05:46:10 crc kubenswrapper[5012]: I0219 05:46:10.066611 5012 generic.go:334] "Generic (PLEG): container finished" podID="0ee4ae6f-65e3-4467-8302-54381eeebd5a" containerID="26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488" exitCode=0 Feb 19 05:46:10 crc kubenswrapper[5012]: I0219 05:46:10.066666 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" event={"ID":"0ee4ae6f-65e3-4467-8302-54381eeebd5a","Type":"ContainerDied","Data":"26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488"} Feb 19 05:46:10 crc kubenswrapper[5012]: I0219 05:46:10.066932 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" event={"ID":"0ee4ae6f-65e3-4467-8302-54381eeebd5a","Type":"ContainerStarted","Data":"76c330a33b78602a2e427fa0cfc346da48f97fdaa4760b156caa3f21371da964"} Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.029840 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.030582 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="ceilometer-central-agent" containerID="cri-o://b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817" gracePeriod=30 Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.031554 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="proxy-httpd" containerID="cri-o://fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f" gracePeriod=30 Feb 19 05:46:11 crc 
kubenswrapper[5012]: I0219 05:46:11.031616 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="ceilometer-notification-agent" containerID="cri-o://7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f" gracePeriod=30 Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.031651 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="sg-core" containerID="cri-o://84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846" gracePeriod=30 Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.040960 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.215:3000/\": EOF" Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.080773 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" event={"ID":"0ee4ae6f-65e3-4467-8302-54381eeebd5a","Type":"ContainerStarted","Data":"d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498"} Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.081048 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.114456 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" podStartSLOduration=3.114423969 podStartE2EDuration="3.114423969s" podCreationTimestamp="2026-02-19 05:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:46:11.099759912 +0000 UTC m=+1267.133082481" watchObservedRunningTime="2026-02-19 
05:46:11.114423969 +0000 UTC m=+1267.147746578" Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.549935 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.550382 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-log" containerID="cri-o://e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721" gracePeriod=30 Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.550527 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-api" containerID="cri-o://7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a" gracePeriod=30 Feb 19 05:46:11 crc kubenswrapper[5012]: I0219 05:46:11.740926 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.102581 5012 generic.go:334] "Generic (PLEG): container finished" podID="27805340-8269-4d8f-9183-b1cb339fea39" containerID="fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f" exitCode=0 Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.102610 5012 generic.go:334] "Generic (PLEG): container finished" podID="27805340-8269-4d8f-9183-b1cb339fea39" containerID="84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846" exitCode=2 Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.102622 5012 generic.go:334] "Generic (PLEG): container finished" podID="27805340-8269-4d8f-9183-b1cb339fea39" containerID="b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817" exitCode=0 Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.102686 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerDied","Data":"fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f"} Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.102735 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerDied","Data":"84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846"} Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.102751 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerDied","Data":"b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817"} Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.105338 5012 generic.go:334] "Generic (PLEG): container finished" podID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerID="e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721" exitCode=143 Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.105399 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06","Type":"ContainerDied","Data":"e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721"} Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.899545 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.963552 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-combined-ca-bundle\") pod \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.963672 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-config-data\") pod \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.963707 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9cwp\" (UniqueName: \"kubernetes.io/projected/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-kube-api-access-z9cwp\") pod \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.963863 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-logs\") pod \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\" (UID: \"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06\") " Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.964381 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-logs" (OuterVolumeSpecName: "logs") pod "1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" (UID: "1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:46:12 crc kubenswrapper[5012]: I0219 05:46:12.970293 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-kube-api-access-z9cwp" (OuterVolumeSpecName: "kube-api-access-z9cwp") pod "1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" (UID: "1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06"). InnerVolumeSpecName "kube-api-access-z9cwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.001059 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-config-data" (OuterVolumeSpecName: "config-data") pod "1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" (UID: "1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.016682 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" (UID: "1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.066075 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9cwp\" (UniqueName: \"kubernetes.io/projected/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-kube-api-access-z9cwp\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.066247 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.066360 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.066463 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.116537 5012 generic.go:334] "Generic (PLEG): container finished" podID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerID="7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a" exitCode=0 Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.116577 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06","Type":"ContainerDied","Data":"7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a"} Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.116601 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06","Type":"ContainerDied","Data":"cf097bcda3c824344ff950b961502394eb3df8a35294e9a44c6a1c8d3d85a714"} Feb 19 05:46:13 crc kubenswrapper[5012]: 
I0219 05:46:13.116602 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.116677 5012 scope.go:117] "RemoveContainer" containerID="7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.144521 5012 scope.go:117] "RemoveContainer" containerID="e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.187947 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.194069 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.220219 5012 scope.go:117] "RemoveContainer" containerID="7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.220416 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:13 crc kubenswrapper[5012]: E0219 05:46:13.220889 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-api" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.220903 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-api" Feb 19 05:46:13 crc kubenswrapper[5012]: E0219 05:46:13.220915 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-log" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.220922 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-log" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.221188 5012 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-log" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.221242 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" containerName="nova-api-api" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.222847 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.228327 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 05:46:13 crc kubenswrapper[5012]: E0219 05:46:13.228474 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a\": container with ID starting with 7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a not found: ID does not exist" containerID="7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.228513 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a"} err="failed to get container status \"7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a\": rpc error: code = NotFound desc = could not find container \"7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a\": container with ID starting with 7b646691f23b9411f8ea11db276c4b4c21b77580d379a97412f5fa9affffd60a not found: ID does not exist" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.228540 5012 scope.go:117] "RemoveContainer" containerID="e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.228702 5012 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-nova-public-svc" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.228996 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 05:46:13 crc kubenswrapper[5012]: E0219 05:46:13.242560 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721\": container with ID starting with e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721 not found: ID does not exist" containerID="e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.242637 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721"} err="failed to get container status \"e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721\": rpc error: code = NotFound desc = could not find container \"e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721\": container with ID starting with e20b1395904faf9203e59914043c529ecf43bc9e71f8d72e56cbf30339e73721 not found: ID does not exist" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.255454 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.391072 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-public-tls-certs\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.391124 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bc58982d-c141-4de8-bf5b-1669db2facb1-logs\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.391177 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djdp5\" (UniqueName: \"kubernetes.io/projected/bc58982d-c141-4de8-bf5b-1669db2facb1-kube-api-access-djdp5\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.391582 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.391632 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.391940 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-config-data\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.494434 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc58982d-c141-4de8-bf5b-1669db2facb1-logs\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " 
pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.494908 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djdp5\" (UniqueName: \"kubernetes.io/projected/bc58982d-c141-4de8-bf5b-1669db2facb1-kube-api-access-djdp5\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.495109 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.495142 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.495134 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc58982d-c141-4de8-bf5b-1669db2facb1-logs\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.495502 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-config-data\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.496086 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-public-tls-certs\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.501077 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-config-data\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.501783 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.502076 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-public-tls-certs\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.502904 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.516258 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djdp5\" (UniqueName: \"kubernetes.io/projected/bc58982d-c141-4de8-bf5b-1669db2facb1-kube-api-access-djdp5\") pod \"nova-api-0\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " pod="openstack/nova-api-0" Feb 19 05:46:13 crc kubenswrapper[5012]: I0219 05:46:13.556574 5012 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:46:14 crc kubenswrapper[5012]: I0219 05:46:14.104710 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:14 crc kubenswrapper[5012]: I0219 05:46:14.130062 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc58982d-c141-4de8-bf5b-1669db2facb1","Type":"ContainerStarted","Data":"04fe4ed95fee83c7e2c8336e973329811054a21e11bbed885171027e8406c6c8"} Feb 19 05:46:14 crc kubenswrapper[5012]: I0219 05:46:14.431128 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:46:14 crc kubenswrapper[5012]: I0219 05:46:14.431617 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:46:14 crc kubenswrapper[5012]: I0219 05:46:14.723616 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06" path="/var/lib/kubelet/pods/1ea29a45-9b27-48aa-a5c3-0ff5f83d3a06/volumes" Feb 19 05:46:14 crc kubenswrapper[5012]: I0219 05:46:14.965276 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.128052 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-log-httpd\") pod \"27805340-8269-4d8f-9183-b1cb339fea39\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.128182 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-config-data\") pod \"27805340-8269-4d8f-9183-b1cb339fea39\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.128504 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "27805340-8269-4d8f-9183-b1cb339fea39" (UID: "27805340-8269-4d8f-9183-b1cb339fea39"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.129133 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-combined-ca-bundle\") pod \"27805340-8269-4d8f-9183-b1cb339fea39\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.129250 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-run-httpd\") pod \"27805340-8269-4d8f-9183-b1cb339fea39\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.129405 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-scripts\") pod \"27805340-8269-4d8f-9183-b1cb339fea39\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.129551 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "27805340-8269-4d8f-9183-b1cb339fea39" (UID: "27805340-8269-4d8f-9183-b1cb339fea39"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.129563 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpktb\" (UniqueName: \"kubernetes.io/projected/27805340-8269-4d8f-9183-b1cb339fea39-kube-api-access-vpktb\") pod \"27805340-8269-4d8f-9183-b1cb339fea39\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.129638 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-ceilometer-tls-certs\") pod \"27805340-8269-4d8f-9183-b1cb339fea39\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.129686 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-sg-core-conf-yaml\") pod \"27805340-8269-4d8f-9183-b1cb339fea39\" (UID: \"27805340-8269-4d8f-9183-b1cb339fea39\") " Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.130486 5012 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.130508 5012 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27805340-8269-4d8f-9183-b1cb339fea39-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.145939 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-scripts" (OuterVolumeSpecName: "scripts") pod "27805340-8269-4d8f-9183-b1cb339fea39" (UID: "27805340-8269-4d8f-9183-b1cb339fea39"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.147286 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27805340-8269-4d8f-9183-b1cb339fea39-kube-api-access-vpktb" (OuterVolumeSpecName: "kube-api-access-vpktb") pod "27805340-8269-4d8f-9183-b1cb339fea39" (UID: "27805340-8269-4d8f-9183-b1cb339fea39"). InnerVolumeSpecName "kube-api-access-vpktb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.158559 5012 generic.go:334] "Generic (PLEG): container finished" podID="27805340-8269-4d8f-9183-b1cb339fea39" containerID="7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f" exitCode=0 Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.158624 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerDied","Data":"7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f"} Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.158653 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27805340-8269-4d8f-9183-b1cb339fea39","Type":"ContainerDied","Data":"f47fd647e08594f727d5b6f46aed06b94634943796a315baee684b47c07fa5fe"} Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.158669 5012 scope.go:117] "RemoveContainer" containerID="fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.158776 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.167516 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "27805340-8269-4d8f-9183-b1cb339fea39" (UID: "27805340-8269-4d8f-9183-b1cb339fea39"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.173645 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc58982d-c141-4de8-bf5b-1669db2facb1","Type":"ContainerStarted","Data":"4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21"} Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.173698 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc58982d-c141-4de8-bf5b-1669db2facb1","Type":"ContainerStarted","Data":"898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705"} Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.205249 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.20523127 podStartE2EDuration="2.20523127s" podCreationTimestamp="2026-02-19 05:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:46:15.203875857 +0000 UTC m=+1271.237198426" watchObservedRunningTime="2026-02-19 05:46:15.20523127 +0000 UTC m=+1271.238553839" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.208585 5012 scope.go:117] "RemoveContainer" containerID="84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.225104 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "27805340-8269-4d8f-9183-b1cb339fea39" (UID: "27805340-8269-4d8f-9183-b1cb339fea39"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.232753 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.232783 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpktb\" (UniqueName: \"kubernetes.io/projected/27805340-8269-4d8f-9183-b1cb339fea39-kube-api-access-vpktb\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.232794 5012 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.232806 5012 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.251096 5012 scope.go:117] "RemoveContainer" containerID="7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.270819 5012 scope.go:117] "RemoveContainer" containerID="b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.273657 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "27805340-8269-4d8f-9183-b1cb339fea39" (UID: "27805340-8269-4d8f-9183-b1cb339fea39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.294614 5012 scope.go:117] "RemoveContainer" containerID="fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f" Feb 19 05:46:15 crc kubenswrapper[5012]: E0219 05:46:15.295083 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f\": container with ID starting with fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f not found: ID does not exist" containerID="fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.295132 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f"} err="failed to get container status \"fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f\": rpc error: code = NotFound desc = could not find container \"fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f\": container with ID starting with fed2d537ac8768957100b9d0bed16cd23b50e1f7ab1ef37820ffd773e2124f4f not found: ID does not exist" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.295161 5012 scope.go:117] "RemoveContainer" containerID="84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846" Feb 19 05:46:15 crc kubenswrapper[5012]: E0219 05:46:15.296825 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846\": container with ID starting with 84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846 not found: ID does 
not exist" containerID="84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.296871 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846"} err="failed to get container status \"84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846\": rpc error: code = NotFound desc = could not find container \"84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846\": container with ID starting with 84084a9df6efa0cf913bd42278043be9746a740e60a70ba8b4e64b5a3aee7846 not found: ID does not exist" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.296899 5012 scope.go:117] "RemoveContainer" containerID="7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f" Feb 19 05:46:15 crc kubenswrapper[5012]: E0219 05:46:15.297235 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f\": container with ID starting with 7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f not found: ID does not exist" containerID="7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.297281 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f"} err="failed to get container status \"7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f\": rpc error: code = NotFound desc = could not find container \"7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f\": container with ID starting with 7bc289258176d77d8e4b5c9c5c27c16271d07e1287a4339d985e0bfb6f578f3f not found: ID does not exist" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.297297 5012 
scope.go:117] "RemoveContainer" containerID="b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817" Feb 19 05:46:15 crc kubenswrapper[5012]: E0219 05:46:15.297666 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817\": container with ID starting with b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817 not found: ID does not exist" containerID="b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.297695 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817"} err="failed to get container status \"b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817\": rpc error: code = NotFound desc = could not find container \"b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817\": container with ID starting with b37a11426371d4afd7fc80c7185e8384298348da06a2dedacd58fe127223e817 not found: ID does not exist" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.299584 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-config-data" (OuterVolumeSpecName: "config-data") pod "27805340-8269-4d8f-9183-b1cb339fea39" (UID: "27805340-8269-4d8f-9183-b1cb339fea39"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.335736 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.335790 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27805340-8269-4d8f-9183-b1cb339fea39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.567847 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.578462 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.596208 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:46:15 crc kubenswrapper[5012]: E0219 05:46:15.596657 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="proxy-httpd" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.596676 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="proxy-httpd" Feb 19 05:46:15 crc kubenswrapper[5012]: E0219 05:46:15.596702 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="ceilometer-notification-agent" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.596710 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="ceilometer-notification-agent" Feb 19 05:46:15 crc kubenswrapper[5012]: E0219 05:46:15.596730 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="sg-core" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.596739 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="sg-core" Feb 19 05:46:15 crc kubenswrapper[5012]: E0219 05:46:15.596752 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="ceilometer-central-agent" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.596761 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="ceilometer-central-agent" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.596975 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="proxy-httpd" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.596994 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="ceilometer-notification-agent" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.597014 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="sg-core" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.597037 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="27805340-8269-4d8f-9183-b1cb339fea39" containerName="ceilometer-central-agent" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.599424 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.603771 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.606864 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.607535 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.607835 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.767034 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.767130 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.767240 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-config-data\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.767288 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-r57xv\" (UniqueName: \"kubernetes.io/projected/9647feae-5291-41e1-9bb4-631f661552b9-kube-api-access-r57xv\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.767417 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9647feae-5291-41e1-9bb4-631f661552b9-run-httpd\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.767465 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.767515 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9647feae-5291-41e1-9bb4-631f661552b9-log-httpd\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.767554 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-scripts\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.870832 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-config-data\") pod \"ceilometer-0\" (UID: 
\"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.870923 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r57xv\" (UniqueName: \"kubernetes.io/projected/9647feae-5291-41e1-9bb4-631f661552b9-kube-api-access-r57xv\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.871029 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9647feae-5291-41e1-9bb4-631f661552b9-run-httpd\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.871090 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.871159 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9647feae-5291-41e1-9bb4-631f661552b9-log-httpd\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.871206 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-scripts\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.871389 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.871460 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.871889 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9647feae-5291-41e1-9bb4-631f661552b9-run-httpd\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.873050 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9647feae-5291-41e1-9bb4-631f661552b9-log-httpd\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.877030 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.878125 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc 
kubenswrapper[5012]: I0219 05:46:15.878265 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-scripts\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.878808 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-config-data\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.886847 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9647feae-5291-41e1-9bb4-631f661552b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.892671 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r57xv\" (UniqueName: \"kubernetes.io/projected/9647feae-5291-41e1-9bb4-631f661552b9-kube-api-access-r57xv\") pod \"ceilometer-0\" (UID: \"9647feae-5291-41e1-9bb4-631f661552b9\") " pod="openstack/ceilometer-0" Feb 19 05:46:15 crc kubenswrapper[5012]: I0219 05:46:15.948965 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 05:46:16 crc kubenswrapper[5012]: I0219 05:46:16.453909 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 05:46:16 crc kubenswrapper[5012]: I0219 05:46:16.715999 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27805340-8269-4d8f-9183-b1cb339fea39" path="/var/lib/kubelet/pods/27805340-8269-4d8f-9183-b1cb339fea39/volumes" Feb 19 05:46:16 crc kubenswrapper[5012]: I0219 05:46:16.741246 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:16 crc kubenswrapper[5012]: I0219 05:46:16.771076 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.200804 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9647feae-5291-41e1-9bb4-631f661552b9","Type":"ContainerStarted","Data":"2af73644b5894679c07a4f93835a002f13a8829a9b46d3ef4965a8b10615c043"} Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.201217 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9647feae-5291-41e1-9bb4-631f661552b9","Type":"ContainerStarted","Data":"6bf60473bddc0fd9a12ac8d4f58b44cb413d42ee7a6d99476728a740e9353092"} Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.221028 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.422404 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-4t5r4"] Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.424458 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.430974 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.431079 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.450015 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4t5r4"] Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.608264 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-config-data\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.608435 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-scripts\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.608490 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64q96\" (UniqueName: \"kubernetes.io/projected/f597fc0f-7407-4f05-916c-70f7a3f145ec-kube-api-access-64q96\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.608808 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.710848 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.710926 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-config-data\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.710993 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-scripts\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.711044 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64q96\" (UniqueName: \"kubernetes.io/projected/f597fc0f-7407-4f05-916c-70f7a3f145ec-kube-api-access-64q96\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.716129 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-scripts\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.717989 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.718429 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-config-data\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:17 crc kubenswrapper[5012]: I0219 05:46:17.750978 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64q96\" (UniqueName: \"kubernetes.io/projected/f597fc0f-7407-4f05-916c-70f7a3f145ec-kube-api-access-64q96\") pod \"nova-cell1-cell-mapping-4t5r4\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:18 crc kubenswrapper[5012]: I0219 05:46:18.043824 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:18 crc kubenswrapper[5012]: I0219 05:46:18.226437 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9647feae-5291-41e1-9bb4-631f661552b9","Type":"ContainerStarted","Data":"2d58e4435a956762307e0d481120cafc3d3b0586b5c958e51b334e3a95d2d854"} Feb 19 05:46:18 crc kubenswrapper[5012]: I0219 05:46:18.226714 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9647feae-5291-41e1-9bb4-631f661552b9","Type":"ContainerStarted","Data":"56ca39e0fad37f61034c8dc94c678a7eaebfbe61eae9b4579509b772bbd7ca90"} Feb 19 05:46:18 crc kubenswrapper[5012]: I0219 05:46:18.536986 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4t5r4"] Feb 19 05:46:18 crc kubenswrapper[5012]: W0219 05:46:18.541129 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf597fc0f_7407_4f05_916c_70f7a3f145ec.slice/crio-e184420ddf676dc7da68164f21f80644f8c58c192add937a650994fd6d15c6b3 WatchSource:0}: Error finding container e184420ddf676dc7da68164f21f80644f8c58c192add937a650994fd6d15c6b3: Status 404 returned error can't find the container with id e184420ddf676dc7da68164f21f80644f8c58c192add937a650994fd6d15c6b3 Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.082538 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.175400 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647496cc8f-4z5vx"] Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.175954 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" podUID="c1589f54-6631-4004-b2a9-e253b43b0644" containerName="dnsmasq-dns" 
containerID="cri-o://401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb" gracePeriod=10 Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.252807 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4t5r4" event={"ID":"f597fc0f-7407-4f05-916c-70f7a3f145ec","Type":"ContainerStarted","Data":"9d9ddb4f57f745aaa08f8b6e7a9a59d578aaf776b154c4fb9be135f3d48b048d"} Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.252847 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4t5r4" event={"ID":"f597fc0f-7407-4f05-916c-70f7a3f145ec","Type":"ContainerStarted","Data":"e184420ddf676dc7da68164f21f80644f8c58c192add937a650994fd6d15c6b3"} Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.280752 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-4t5r4" podStartSLOduration=2.280723738 podStartE2EDuration="2.280723738s" podCreationTimestamp="2026-02-19 05:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:46:19.277854688 +0000 UTC m=+1275.311177257" watchObservedRunningTime="2026-02-19 05:46:19.280723738 +0000 UTC m=+1275.314046307" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.653730 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.763050 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-svc\") pod \"c1589f54-6631-4004-b2a9-e253b43b0644\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.763165 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-sb\") pod \"c1589f54-6631-4004-b2a9-e253b43b0644\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.763280 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-swift-storage-0\") pod \"c1589f54-6631-4004-b2a9-e253b43b0644\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.763390 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfxlj\" (UniqueName: \"kubernetes.io/projected/c1589f54-6631-4004-b2a9-e253b43b0644-kube-api-access-mfxlj\") pod \"c1589f54-6631-4004-b2a9-e253b43b0644\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.763458 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-config\") pod \"c1589f54-6631-4004-b2a9-e253b43b0644\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.763495 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-nb\") pod \"c1589f54-6631-4004-b2a9-e253b43b0644\" (UID: \"c1589f54-6631-4004-b2a9-e253b43b0644\") " Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.787535 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1589f54-6631-4004-b2a9-e253b43b0644-kube-api-access-mfxlj" (OuterVolumeSpecName: "kube-api-access-mfxlj") pod "c1589f54-6631-4004-b2a9-e253b43b0644" (UID: "c1589f54-6631-4004-b2a9-e253b43b0644"). InnerVolumeSpecName "kube-api-access-mfxlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.823252 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-config" (OuterVolumeSpecName: "config") pod "c1589f54-6631-4004-b2a9-e253b43b0644" (UID: "c1589f54-6631-4004-b2a9-e253b43b0644"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.829921 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c1589f54-6631-4004-b2a9-e253b43b0644" (UID: "c1589f54-6631-4004-b2a9-e253b43b0644"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.834887 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c1589f54-6631-4004-b2a9-e253b43b0644" (UID: "c1589f54-6631-4004-b2a9-e253b43b0644"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.839692 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c1589f54-6631-4004-b2a9-e253b43b0644" (UID: "c1589f54-6631-4004-b2a9-e253b43b0644"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.841341 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c1589f54-6631-4004-b2a9-e253b43b0644" (UID: "c1589f54-6631-4004-b2a9-e253b43b0644"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.867150 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.867340 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.867428 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.867533 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfxlj\" (UniqueName: \"kubernetes.io/projected/c1589f54-6631-4004-b2a9-e253b43b0644-kube-api-access-mfxlj\") on node \"crc\" DevicePath 
\"\"" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.867612 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:19 crc kubenswrapper[5012]: I0219 05:46:19.867683 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1589f54-6631-4004-b2a9-e253b43b0644-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.272387 5012 generic.go:334] "Generic (PLEG): container finished" podID="c1589f54-6631-4004-b2a9-e253b43b0644" containerID="401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb" exitCode=0 Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.272586 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" event={"ID":"c1589f54-6631-4004-b2a9-e253b43b0644","Type":"ContainerDied","Data":"401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb"} Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.272938 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" event={"ID":"c1589f54-6631-4004-b2a9-e253b43b0644","Type":"ContainerDied","Data":"8a23cad7dbe6ef631f80ea11b62d7b988e6b72ef836fd0ba728b4bc06cb53bf4"} Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.272967 5012 scope.go:117] "RemoveContainer" containerID="401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb" Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.272696 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-647496cc8f-4z5vx" Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.282792 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9647feae-5291-41e1-9bb4-631f661552b9","Type":"ContainerStarted","Data":"0c23692d5ed1b1882f1b396df4e1f7cb0268dc37efd4bd4b5d74511691a797bc"} Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.326930 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.854640061 podStartE2EDuration="5.32690087s" podCreationTimestamp="2026-02-19 05:46:15 +0000 UTC" firstStartedPulling="2026-02-19 05:46:16.451553413 +0000 UTC m=+1272.484875982" lastFinishedPulling="2026-02-19 05:46:18.923814212 +0000 UTC m=+1274.957136791" observedRunningTime="2026-02-19 05:46:20.306851262 +0000 UTC m=+1276.340173841" watchObservedRunningTime="2026-02-19 05:46:20.32690087 +0000 UTC m=+1276.360223479" Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.327406 5012 scope.go:117] "RemoveContainer" containerID="ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8" Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.360934 5012 scope.go:117] "RemoveContainer" containerID="401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb" Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.362852 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647496cc8f-4z5vx"] Feb 19 05:46:20 crc kubenswrapper[5012]: E0219 05:46:20.368454 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb\": container with ID starting with 401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb not found: ID does not exist" containerID="401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb" Feb 19 05:46:20 crc 
kubenswrapper[5012]: I0219 05:46:20.368556 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb"} err="failed to get container status \"401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb\": rpc error: code = NotFound desc = could not find container \"401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb\": container with ID starting with 401f80ed7d8955eddd8bed14b81728f35265e9be51b261c3b8a50801747a1ccb not found: ID does not exist" Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.368641 5012 scope.go:117] "RemoveContainer" containerID="ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8" Feb 19 05:46:20 crc kubenswrapper[5012]: E0219 05:46:20.370440 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8\": container with ID starting with ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8 not found: ID does not exist" containerID="ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8" Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.370620 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8"} err="failed to get container status \"ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8\": rpc error: code = NotFound desc = could not find container \"ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8\": container with ID starting with ed7acdf6ba81b3ae6002a359d0c6c67d7469fe54fff51deb6cc5f21c6db4d4d8 not found: ID does not exist" Feb 19 05:46:20 crc kubenswrapper[5012]: I0219 05:46:20.372142 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-647496cc8f-4z5vx"] Feb 19 05:46:20 crc 
kubenswrapper[5012]: I0219 05:46:20.720452 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1589f54-6631-4004-b2a9-e253b43b0644" path="/var/lib/kubelet/pods/c1589f54-6631-4004-b2a9-e253b43b0644/volumes" Feb 19 05:46:21 crc kubenswrapper[5012]: I0219 05:46:21.297438 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 05:46:23 crc kubenswrapper[5012]: I0219 05:46:23.557858 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 05:46:23 crc kubenswrapper[5012]: I0219 05:46:23.559743 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 05:46:24 crc kubenswrapper[5012]: I0219 05:46:24.335414 5012 generic.go:334] "Generic (PLEG): container finished" podID="f597fc0f-7407-4f05-916c-70f7a3f145ec" containerID="9d9ddb4f57f745aaa08f8b6e7a9a59d578aaf776b154c4fb9be135f3d48b048d" exitCode=0 Feb 19 05:46:24 crc kubenswrapper[5012]: I0219 05:46:24.336894 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4t5r4" event={"ID":"f597fc0f-7407-4f05-916c-70f7a3f145ec","Type":"ContainerDied","Data":"9d9ddb4f57f745aaa08f8b6e7a9a59d578aaf776b154c4fb9be135f3d48b048d"} Feb 19 05:46:24 crc kubenswrapper[5012]: I0219 05:46:24.563600 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 05:46:24 crc kubenswrapper[5012]: I0219 05:46:24.567552 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Feb 19 05:46:25 crc kubenswrapper[5012]: I0219 05:46:25.837902 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.034087 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64q96\" (UniqueName: \"kubernetes.io/projected/f597fc0f-7407-4f05-916c-70f7a3f145ec-kube-api-access-64q96\") pod \"f597fc0f-7407-4f05-916c-70f7a3f145ec\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.034214 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-combined-ca-bundle\") pod \"f597fc0f-7407-4f05-916c-70f7a3f145ec\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.034265 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-scripts\") pod \"f597fc0f-7407-4f05-916c-70f7a3f145ec\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.034497 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-config-data\") pod \"f597fc0f-7407-4f05-916c-70f7a3f145ec\" (UID: \"f597fc0f-7407-4f05-916c-70f7a3f145ec\") " Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.041586 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f597fc0f-7407-4f05-916c-70f7a3f145ec-kube-api-access-64q96" (OuterVolumeSpecName: "kube-api-access-64q96") pod "f597fc0f-7407-4f05-916c-70f7a3f145ec" (UID: "f597fc0f-7407-4f05-916c-70f7a3f145ec"). 
InnerVolumeSpecName "kube-api-access-64q96". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.041914 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-scripts" (OuterVolumeSpecName: "scripts") pod "f597fc0f-7407-4f05-916c-70f7a3f145ec" (UID: "f597fc0f-7407-4f05-916c-70f7a3f145ec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.068491 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f597fc0f-7407-4f05-916c-70f7a3f145ec" (UID: "f597fc0f-7407-4f05-916c-70f7a3f145ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.071221 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-config-data" (OuterVolumeSpecName: "config-data") pod "f597fc0f-7407-4f05-916c-70f7a3f145ec" (UID: "f597fc0f-7407-4f05-916c-70f7a3f145ec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.136991 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64q96\" (UniqueName: \"kubernetes.io/projected/f597fc0f-7407-4f05-916c-70f7a3f145ec-kube-api-access-64q96\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.137053 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.137070 5012 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.137082 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f597fc0f-7407-4f05-916c-70f7a3f145ec-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.359180 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4t5r4" event={"ID":"f597fc0f-7407-4f05-916c-70f7a3f145ec","Type":"ContainerDied","Data":"e184420ddf676dc7da68164f21f80644f8c58c192add937a650994fd6d15c6b3"} Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.359401 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e184420ddf676dc7da68164f21f80644f8c58c192add937a650994fd6d15c6b3" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.359258 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4t5r4" Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.569052 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.569530 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-log" containerID="cri-o://898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705" gracePeriod=30 Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.569609 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-api" containerID="cri-o://4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21" gracePeriod=30 Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.578415 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.578621 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="96352ff3-accb-4fd1-8fa4-eec10f340eaf" containerName="nova-scheduler-scheduler" containerID="cri-o://2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c" gracePeriod=30 Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.674188 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.674794 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-log" containerID="cri-o://6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b" gracePeriod=30 Feb 19 05:46:26 crc kubenswrapper[5012]: I0219 05:46:26.674865 5012 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-metadata" containerID="cri-o://ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3" gracePeriod=30 Feb 19 05:46:27 crc kubenswrapper[5012]: I0219 05:46:27.371409 5012 generic.go:334] "Generic (PLEG): container finished" podID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerID="6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b" exitCode=143 Feb 19 05:46:27 crc kubenswrapper[5012]: I0219 05:46:27.371468 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b","Type":"ContainerDied","Data":"6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b"} Feb 19 05:46:27 crc kubenswrapper[5012]: I0219 05:46:27.374679 5012 generic.go:334] "Generic (PLEG): container finished" podID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerID="898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705" exitCode=143 Feb 19 05:46:27 crc kubenswrapper[5012]: I0219 05:46:27.374712 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc58982d-c141-4de8-bf5b-1669db2facb1","Type":"ContainerDied","Data":"898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705"} Feb 19 05:46:27 crc kubenswrapper[5012]: I0219 05:46:27.984027 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.077679 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-public-tls-certs\") pod \"bc58982d-c141-4de8-bf5b-1669db2facb1\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.077734 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-config-data\") pod \"bc58982d-c141-4de8-bf5b-1669db2facb1\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.077782 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djdp5\" (UniqueName: \"kubernetes.io/projected/bc58982d-c141-4de8-bf5b-1669db2facb1-kube-api-access-djdp5\") pod \"bc58982d-c141-4de8-bf5b-1669db2facb1\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.077830 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-combined-ca-bundle\") pod \"bc58982d-c141-4de8-bf5b-1669db2facb1\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.077881 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc58982d-c141-4de8-bf5b-1669db2facb1-logs\") pod \"bc58982d-c141-4de8-bf5b-1669db2facb1\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.077961 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-internal-tls-certs\") pod \"bc58982d-c141-4de8-bf5b-1669db2facb1\" (UID: \"bc58982d-c141-4de8-bf5b-1669db2facb1\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.084053 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc58982d-c141-4de8-bf5b-1669db2facb1-kube-api-access-djdp5" (OuterVolumeSpecName: "kube-api-access-djdp5") pod "bc58982d-c141-4de8-bf5b-1669db2facb1" (UID: "bc58982d-c141-4de8-bf5b-1669db2facb1"). InnerVolumeSpecName "kube-api-access-djdp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.086198 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc58982d-c141-4de8-bf5b-1669db2facb1-logs" (OuterVolumeSpecName: "logs") pod "bc58982d-c141-4de8-bf5b-1669db2facb1" (UID: "bc58982d-c141-4de8-bf5b-1669db2facb1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.104460 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-config-data" (OuterVolumeSpecName: "config-data") pod "bc58982d-c141-4de8-bf5b-1669db2facb1" (UID: "bc58982d-c141-4de8-bf5b-1669db2facb1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.126924 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc58982d-c141-4de8-bf5b-1669db2facb1" (UID: "bc58982d-c141-4de8-bf5b-1669db2facb1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.134923 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bc58982d-c141-4de8-bf5b-1669db2facb1" (UID: "bc58982d-c141-4de8-bf5b-1669db2facb1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.137665 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bc58982d-c141-4de8-bf5b-1669db2facb1" (UID: "bc58982d-c141-4de8-bf5b-1669db2facb1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.166356 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.184234 5012 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.184259 5012 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.184269 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.184279 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djdp5\" (UniqueName: \"kubernetes.io/projected/bc58982d-c141-4de8-bf5b-1669db2facb1-kube-api-access-djdp5\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.184322 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc58982d-c141-4de8-bf5b-1669db2facb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.184330 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc58982d-c141-4de8-bf5b-1669db2facb1-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.286049 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9rh7\" (UniqueName: \"kubernetes.io/projected/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-kube-api-access-q9rh7\") pod \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\" (UID: 
\"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.286145 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-nova-metadata-tls-certs\") pod \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.286250 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-combined-ca-bundle\") pod \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.286335 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-config-data\") pod \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.286371 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-logs\") pod \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\" (UID: \"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b\") " Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.287001 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-logs" (OuterVolumeSpecName: "logs") pod "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" (UID: "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.292449 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-kube-api-access-q9rh7" (OuterVolumeSpecName: "kube-api-access-q9rh7") pod "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" (UID: "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b"). InnerVolumeSpecName "kube-api-access-q9rh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.309236 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" (UID: "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.317272 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-config-data" (OuterVolumeSpecName: "config-data") pod "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" (UID: "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.347901 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" (UID: "5dbca55d-fe7e-4a74-a25c-8c495eb29e3b"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.384380 5012 generic.go:334] "Generic (PLEG): container finished" podID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerID="4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21" exitCode=0 Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.385201 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc58982d-c141-4de8-bf5b-1669db2facb1","Type":"ContainerDied","Data":"4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21"} Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.385398 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc58982d-c141-4de8-bf5b-1669db2facb1","Type":"ContainerDied","Data":"04fe4ed95fee83c7e2c8336e973329811054a21e11bbed885171027e8406c6c8"} Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.385465 5012 scope.go:117] "RemoveContainer" containerID="4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.385541 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.387477 5012 generic.go:334] "Generic (PLEG): container finished" podID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerID="ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3" exitCode=0 Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.387521 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b","Type":"ContainerDied","Data":"ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3"} Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.387549 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5dbca55d-fe7e-4a74-a25c-8c495eb29e3b","Type":"ContainerDied","Data":"d5151fe8a2179cf3ec35bc35e025b6f051659ce0400cbb15c53f153c34909628"} Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.387608 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.388723 5012 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.388741 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.388750 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.388760 5012 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-logs\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.388769 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9rh7\" (UniqueName: \"kubernetes.io/projected/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b-kube-api-access-q9rh7\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.418980 5012 scope.go:117] "RemoveContainer" containerID="898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.453528 5012 scope.go:117] "RemoveContainer" containerID="4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.454469 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21\": container with ID starting with 4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21 not found: ID does not exist" containerID="4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.457340 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21"} err="failed to get container status \"4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21\": rpc error: code = NotFound desc = could not find container \"4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21\": container with ID starting with 4cb751b79246c2eeaf67cf48b0a6882afcdcce5f31f1059d953cdeaf7e368e21 not found: ID does not exist" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.457453 5012 scope.go:117] "RemoveContainer" containerID="898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.457963 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705\": container with ID starting with 898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705 not found: ID does not exist" containerID="898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.458039 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705"} err="failed to get container status \"898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705\": rpc error: code = NotFound desc = could not find container \"898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705\": container with ID 
starting with 898470d1f801708f2ef32678f55e4f7d7ac694f27a7877bf36dbe499664ab705 not found: ID does not exist" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.458123 5012 scope.go:117] "RemoveContainer" containerID="ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.466091 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.492218 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.506600 5012 scope.go:117] "RemoveContainer" containerID="6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.513409 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.522746 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.524051 5012 scope.go:117] "RemoveContainer" containerID="ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.524997 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3\": container with ID starting with ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3 not found: ID does not exist" containerID="ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.525049 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3"} err="failed to get container status 
\"ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3\": rpc error: code = NotFound desc = could not find container \"ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3\": container with ID starting with ac2964c65e06cfb14ab68d7460bba473fa392e3e6a86e2f66189e1f5fe6e62f3 not found: ID does not exist" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.525077 5012 scope.go:117] "RemoveContainer" containerID="6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.525421 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b\": container with ID starting with 6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b not found: ID does not exist" containerID="6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.525441 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b"} err="failed to get container status \"6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b\": rpc error: code = NotFound desc = could not find container \"6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b\": container with ID starting with 6144d66967d37506ebb5d4e9e84f66658c4ed388f4bec9072d3566f1959b577b not found: ID does not exist" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.531707 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.532182 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1589f54-6631-4004-b2a9-e253b43b0644" containerName="dnsmasq-dns" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532198 5012 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c1589f54-6631-4004-b2a9-e253b43b0644" containerName="dnsmasq-dns" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.532209 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1589f54-6631-4004-b2a9-e253b43b0644" containerName="init" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532215 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1589f54-6631-4004-b2a9-e253b43b0644" containerName="init" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.532228 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-log" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532234 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-log" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.532244 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-api" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532250 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-api" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.532261 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-metadata" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532267 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-metadata" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.532280 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f597fc0f-7407-4f05-916c-70f7a3f145ec" containerName="nova-manage" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532285 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f597fc0f-7407-4f05-916c-70f7a3f145ec" containerName="nova-manage" Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.532331 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-log" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532337 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-log" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532507 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1589f54-6631-4004-b2a9-e253b43b0644" containerName="dnsmasq-dns" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532520 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-api" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532533 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-log" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532546 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" containerName="nova-metadata-metadata" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532560 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f597fc0f-7407-4f05-916c-70f7a3f145ec" containerName="nova-manage" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.532572 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" containerName="nova-api-log" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.533633 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.535841 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.535882 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.536395 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.555632 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.557287 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.560889 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.561071 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.573857 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.582894 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:46:28 crc kubenswrapper[5012]: E0219 05:46:28.601058 5012 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc58982d_c141_4de8_bf5b_1669db2facb1.slice/crio-04fe4ed95fee83c7e2c8336e973329811054a21e11bbed885171027e8406c6c8\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc58982d_c141_4de8_bf5b_1669db2facb1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dbca55d_fe7e_4a74_a25c_8c495eb29e3b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dbca55d_fe7e_4a74_a25c_8c495eb29e3b.slice/crio-d5151fe8a2179cf3ec35bc35e025b6f051659ce0400cbb15c53f153c34909628\": RecentStats: unable to find data in memory cache]" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.697861 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.697951 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/396b18f9-9859-4b42-aca1-c29c3724c86c-config-data\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.698026 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/396b18f9-9859-4b42-aca1-c29c3724c86c-logs\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.698108 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/396b18f9-9859-4b42-aca1-c29c3724c86c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.698150 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxxbt\" (UniqueName: \"kubernetes.io/projected/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-kube-api-access-jxxbt\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.698192 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/396b18f9-9859-4b42-aca1-c29c3724c86c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.698254 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-public-tls-certs\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.698286 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xrjn\" (UniqueName: \"kubernetes.io/projected/396b18f9-9859-4b42-aca1-c29c3724c86c-kube-api-access-7xrjn\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.698386 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-config-data\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 
05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.698417 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-logs\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.698459 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.713364 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dbca55d-fe7e-4a74-a25c-8c495eb29e3b" path="/var/lib/kubelet/pods/5dbca55d-fe7e-4a74-a25c-8c495eb29e3b/volumes" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.714226 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc58982d-c141-4de8-bf5b-1669db2facb1" path="/var/lib/kubelet/pods/bc58982d-c141-4de8-bf5b-1669db2facb1/volumes" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.799730 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800073 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/396b18f9-9859-4b42-aca1-c29c3724c86c-config-data\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800144 
5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/396b18f9-9859-4b42-aca1-c29c3724c86c-logs\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800234 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/396b18f9-9859-4b42-aca1-c29c3724c86c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800283 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxxbt\" (UniqueName: \"kubernetes.io/projected/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-kube-api-access-jxxbt\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800334 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/396b18f9-9859-4b42-aca1-c29c3724c86c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800379 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-public-tls-certs\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800400 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xrjn\" (UniqueName: 
\"kubernetes.io/projected/396b18f9-9859-4b42-aca1-c29c3724c86c-kube-api-access-7xrjn\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800474 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-config-data\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800496 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-logs\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800526 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.800902 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/396b18f9-9859-4b42-aca1-c29c3724c86c-logs\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.801932 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-logs\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.803677 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/396b18f9-9859-4b42-aca1-c29c3724c86c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.806289 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/396b18f9-9859-4b42-aca1-c29c3724c86c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.806607 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/396b18f9-9859-4b42-aca1-c29c3724c86c-config-data\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.807029 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.812489 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-config-data\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.813990 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " 
pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.818478 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxxbt\" (UniqueName: \"kubernetes.io/projected/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-kube-api-access-jxxbt\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.818579 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1a529b0-65f7-4680-a4fd-4dacebc1ab83-public-tls-certs\") pod \"nova-api-0\" (UID: \"c1a529b0-65f7-4680-a4fd-4dacebc1ab83\") " pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.823595 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xrjn\" (UniqueName: \"kubernetes.io/projected/396b18f9-9859-4b42-aca1-c29c3724c86c-kube-api-access-7xrjn\") pod \"nova-metadata-0\" (UID: \"396b18f9-9859-4b42-aca1-c29c3724c86c\") " pod="openstack/nova-metadata-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.855991 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 05:46:28 crc kubenswrapper[5012]: I0219 05:46:28.876448 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.209461 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.310839 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj24n\" (UniqueName: \"kubernetes.io/projected/96352ff3-accb-4fd1-8fa4-eec10f340eaf-kube-api-access-gj24n\") pod \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.310907 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-combined-ca-bundle\") pod \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.311145 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-config-data\") pod \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\" (UID: \"96352ff3-accb-4fd1-8fa4-eec10f340eaf\") " Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.316520 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96352ff3-accb-4fd1-8fa4-eec10f340eaf-kube-api-access-gj24n" (OuterVolumeSpecName: "kube-api-access-gj24n") pod "96352ff3-accb-4fd1-8fa4-eec10f340eaf" (UID: "96352ff3-accb-4fd1-8fa4-eec10f340eaf"). InnerVolumeSpecName "kube-api-access-gj24n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.346462 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.354645 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96352ff3-accb-4fd1-8fa4-eec10f340eaf" (UID: "96352ff3-accb-4fd1-8fa4-eec10f340eaf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.364902 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-config-data" (OuterVolumeSpecName: "config-data") pod "96352ff3-accb-4fd1-8fa4-eec10f340eaf" (UID: "96352ff3-accb-4fd1-8fa4-eec10f340eaf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.402348 5012 generic.go:334] "Generic (PLEG): container finished" podID="96352ff3-accb-4fd1-8fa4-eec10f340eaf" containerID="2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c" exitCode=0 Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.402521 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.403930 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96352ff3-accb-4fd1-8fa4-eec10f340eaf","Type":"ContainerDied","Data":"2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c"} Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.403982 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96352ff3-accb-4fd1-8fa4-eec10f340eaf","Type":"ContainerDied","Data":"0efe32e1979dd31ba39916aaeb5dec48666e72ade534d761ecc6b79a3666cbef"} Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.404000 5012 scope.go:117] "RemoveContainer" containerID="2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.409983 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c1a529b0-65f7-4680-a4fd-4dacebc1ab83","Type":"ContainerStarted","Data":"048c66c098b7f9e8eaec85a26f1504617a2fd828fc11f95817a7669416de03c9"} Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.413466 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.413502 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj24n\" (UniqueName: \"kubernetes.io/projected/96352ff3-accb-4fd1-8fa4-eec10f340eaf-kube-api-access-gj24n\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.413516 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96352ff3-accb-4fd1-8fa4-eec10f340eaf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 
05:46:29.430430 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.454456 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.461423 5012 scope.go:117] "RemoveContainer" containerID="2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.472093 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:46:29 crc kubenswrapper[5012]: E0219 05:46:29.473061 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c\": container with ID starting with 2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c not found: ID does not exist" containerID="2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.473102 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c"} err="failed to get container status \"2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c\": rpc error: code = NotFound desc = could not find container \"2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c\": container with ID starting with 2160a93a2267603d774a9ddc214804cae249054936777bb45916eee28d693a6c not found: ID does not exist" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.495494 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:46:29 crc kubenswrapper[5012]: E0219 05:46:29.496167 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96352ff3-accb-4fd1-8fa4-eec10f340eaf" 
containerName="nova-scheduler-scheduler" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.496198 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="96352ff3-accb-4fd1-8fa4-eec10f340eaf" containerName="nova-scheduler-scheduler" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.496598 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="96352ff3-accb-4fd1-8fa4-eec10f340eaf" containerName="nova-scheduler-scheduler" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.497650 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.497780 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.500658 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.618204 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0-config-data\") pod \"nova-scheduler-0\" (UID: \"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0\") " pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.618261 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzxb2\" (UniqueName: \"kubernetes.io/projected/6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0-kube-api-access-mzxb2\") pod \"nova-scheduler-0\" (UID: \"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0\") " pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.618370 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0\") " pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.720589 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0-config-data\") pod \"nova-scheduler-0\" (UID: \"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0\") " pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.720653 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzxb2\" (UniqueName: \"kubernetes.io/projected/6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0-kube-api-access-mzxb2\") pod \"nova-scheduler-0\" (UID: \"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0\") " pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.720756 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0\") " pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.724180 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0\") " pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.724225 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0-config-data\") pod \"nova-scheduler-0\" (UID: \"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0\") " 
pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.736169 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzxb2\" (UniqueName: \"kubernetes.io/projected/6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0-kube-api-access-mzxb2\") pod \"nova-scheduler-0\" (UID: \"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0\") " pod="openstack/nova-scheduler-0" Feb 19 05:46:29 crc kubenswrapper[5012]: I0219 05:46:29.819898 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.281647 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 05:46:30 crc kubenswrapper[5012]: W0219 05:46:30.287458 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cfb0ed7_fe80_4d03_9ecb_31587c57bfd0.slice/crio-203b529037ccc9aaf68ea8bdc0675c8a399762c363d09de08c57c745a8caf5b4 WatchSource:0}: Error finding container 203b529037ccc9aaf68ea8bdc0675c8a399762c363d09de08c57c745a8caf5b4: Status 404 returned error can't find the container with id 203b529037ccc9aaf68ea8bdc0675c8a399762c363d09de08c57c745a8caf5b4 Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.422897 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0","Type":"ContainerStarted","Data":"203b529037ccc9aaf68ea8bdc0675c8a399762c363d09de08c57c745a8caf5b4"} Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.425077 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"396b18f9-9859-4b42-aca1-c29c3724c86c","Type":"ContainerStarted","Data":"3520479af12ca532f031d29fd0e70688ec7f4a71074814c2a5e54db9b37ba120"} Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.425100 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"396b18f9-9859-4b42-aca1-c29c3724c86c","Type":"ContainerStarted","Data":"c7d0760ee03878273f710fd278cc50691fe286371da17a9f060d59bf0f4c14f6"} Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.425111 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"396b18f9-9859-4b42-aca1-c29c3724c86c","Type":"ContainerStarted","Data":"92a0dc095e141f2fdc5820ca1127aeb4996f8e385984fc4a71f51fff6af9276b"} Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.427350 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c1a529b0-65f7-4680-a4fd-4dacebc1ab83","Type":"ContainerStarted","Data":"85ec488a97bf35b5601f9415104f939a31d0ee0fa34c9f6050e5220eb8811b30"} Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.427371 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c1a529b0-65f7-4680-a4fd-4dacebc1ab83","Type":"ContainerStarted","Data":"7184e48a5fb705cafe26540d60e9f3e9e43e603bf3267a9e7be7910ee208ba84"} Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.458807 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.458791138 podStartE2EDuration="2.458791138s" podCreationTimestamp="2026-02-19 05:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:46:30.452374422 +0000 UTC m=+1286.485697011" watchObservedRunningTime="2026-02-19 05:46:30.458791138 +0000 UTC m=+1286.492113707" Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.488674 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.488654635 podStartE2EDuration="2.488654635s" podCreationTimestamp="2026-02-19 05:46:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:46:30.480928917 +0000 UTC m=+1286.514251486" watchObservedRunningTime="2026-02-19 05:46:30.488654635 +0000 UTC m=+1286.521977224" Feb 19 05:46:30 crc kubenswrapper[5012]: I0219 05:46:30.715399 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96352ff3-accb-4fd1-8fa4-eec10f340eaf" path="/var/lib/kubelet/pods/96352ff3-accb-4fd1-8fa4-eec10f340eaf/volumes" Feb 19 05:46:31 crc kubenswrapper[5012]: I0219 05:46:31.467121 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0","Type":"ContainerStarted","Data":"e600cee030fb526a93db53704118d775b59807400b433d63578aec2beb4b9ff5"} Feb 19 05:46:31 crc kubenswrapper[5012]: I0219 05:46:31.496068 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.496039102 podStartE2EDuration="2.496039102s" podCreationTimestamp="2026-02-19 05:46:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:46:31.493960972 +0000 UTC m=+1287.527283611" watchObservedRunningTime="2026-02-19 05:46:31.496039102 +0000 UTC m=+1287.529361711" Feb 19 05:46:33 crc kubenswrapper[5012]: I0219 05:46:33.877168 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 05:46:33 crc kubenswrapper[5012]: I0219 05:46:33.878478 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 05:46:34 crc kubenswrapper[5012]: I0219 05:46:34.820262 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 05:46:38 crc kubenswrapper[5012]: I0219 05:46:38.856242 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 
05:46:38 crc kubenswrapper[5012]: I0219 05:46:38.858782 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 05:46:38 crc kubenswrapper[5012]: I0219 05:46:38.877555 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 05:46:38 crc kubenswrapper[5012]: I0219 05:46:38.877612 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 05:46:39 crc kubenswrapper[5012]: I0219 05:46:39.821225 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 05:46:39 crc kubenswrapper[5012]: I0219 05:46:39.869446 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 05:46:39 crc kubenswrapper[5012]: I0219 05:46:39.875262 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c1a529b0-65f7-4680-a4fd-4dacebc1ab83" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 05:46:39 crc kubenswrapper[5012]: I0219 05:46:39.875843 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c1a529b0-65f7-4680-a4fd-4dacebc1ab83" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 05:46:39 crc kubenswrapper[5012]: I0219 05:46:39.897512 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="396b18f9-9859-4b42-aca1-c29c3724c86c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 05:46:39 crc kubenswrapper[5012]: I0219 
05:46:39.897909 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="396b18f9-9859-4b42-aca1-c29c3724c86c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 05:46:40 crc kubenswrapper[5012]: I0219 05:46:40.651284 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 05:46:44 crc kubenswrapper[5012]: I0219 05:46:44.430893 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:46:44 crc kubenswrapper[5012]: I0219 05:46:44.431003 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:46:44 crc kubenswrapper[5012]: I0219 05:46:44.431071 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:46:44 crc kubenswrapper[5012]: I0219 05:46:44.432152 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6721017012e745bfd497807b3e0766cbf7c779446215cbbe94491f729f86c6ac"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 05:46:44 crc kubenswrapper[5012]: I0219 05:46:44.432257 5012 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://6721017012e745bfd497807b3e0766cbf7c779446215cbbe94491f729f86c6ac" gracePeriod=600
Feb 19 05:46:44 crc kubenswrapper[5012]: I0219 05:46:44.661599 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="6721017012e745bfd497807b3e0766cbf7c779446215cbbe94491f729f86c6ac" exitCode=0
Feb 19 05:46:44 crc kubenswrapper[5012]: I0219 05:46:44.661712 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"6721017012e745bfd497807b3e0766cbf7c779446215cbbe94491f729f86c6ac"}
Feb 19 05:46:44 crc kubenswrapper[5012]: I0219 05:46:44.662112 5012 scope.go:117] "RemoveContainer" containerID="0209690f43a6b6283a91e933f5b897e5259f5fced0261c8b5238e804ce206915"
Feb 19 05:46:45 crc kubenswrapper[5012]: I0219 05:46:45.678998 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"}
Feb 19 05:46:45 crc kubenswrapper[5012]: I0219 05:46:45.964675 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 19 05:46:48 crc kubenswrapper[5012]: I0219 05:46:48.879170 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 19 05:46:48 crc kubenswrapper[5012]: I0219 05:46:48.879971 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 19 05:46:49 crc kubenswrapper[5012]: I0219 05:46:49.019898 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 19 05:46:49 crc kubenswrapper[5012]: I0219 05:46:49.024475 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 19 05:46:49 crc kubenswrapper[5012]: I0219 05:46:49.027525 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 19 05:46:49 crc kubenswrapper[5012]: I0219 05:46:49.030703 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 19 05:46:49 crc kubenswrapper[5012]: I0219 05:46:49.032905 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 19 05:46:49 crc kubenswrapper[5012]: I0219 05:46:49.736401 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 19 05:46:49 crc kubenswrapper[5012]: I0219 05:46:49.741611 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 19 05:46:49 crc kubenswrapper[5012]: I0219 05:46:49.750861 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 19 05:46:57 crc kubenswrapper[5012]: I0219 05:46:57.362111 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 05:46:58 crc kubenswrapper[5012]: I0219 05:46:58.345319 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 19 05:47:00 crc kubenswrapper[5012]: I0219 05:47:00.793225 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b0095712-262e-4562-afac-0f2f4372224d" containerName="rabbitmq" containerID="cri-o://dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32" gracePeriod=604797
Feb 19 05:47:00 crc kubenswrapper[5012]: I0219 05:47:00.988759 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b0095712-262e-4562-afac-0f2f4372224d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused"
Feb 19 05:47:01 crc kubenswrapper[5012]: I0219 05:47:01.881964 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="a13d3004-2045-4daf-a925-7eccf541b1b4" containerName="rabbitmq" containerID="cri-o://0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7" gracePeriod=604797
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.506451 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.559260 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0095712-262e-4562-afac-0f2f4372224d-pod-info\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") "
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.559784 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-plugins-conf\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") "
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.559830 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-tls\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") "
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.559969 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-config-data\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") "
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.560109 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-plugins\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") "
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.560137 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8phq\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-kube-api-access-b8phq\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") "
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.560619 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0095712-262e-4562-afac-0f2f4372224d-erlang-cookie-secret\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") "
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.560697 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-erlang-cookie\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") "
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.560727 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-confd\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") "
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.560765 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.560794 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-server-conf\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") "
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.560839 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"b0095712-262e-4562-afac-0f2f4372224d\" (UID: \"b0095712-262e-4562-afac-0f2f4372224d\") "
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.562293 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.562867 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.574655 5012 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.574697 5012 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.574715 5012 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.601055 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0095712-262e-4562-afac-0f2f4372224d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.612237 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b0095712-262e-4562-afac-0f2f4372224d-pod-info" (OuterVolumeSpecName: "pod-info") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.617610 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.628614 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-kube-api-access-b8phq" (OuterVolumeSpecName: "kube-api-access-b8phq") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "kube-api-access-b8phq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.630997 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-config-data" (OuterVolumeSpecName: "config-data") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.633971 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.655588 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-server-conf" (OuterVolumeSpecName: "server-conf") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.677124 5012 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0095712-262e-4562-afac-0f2f4372224d-pod-info\") on node \"crc\" DevicePath \"\""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.677158 5012 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.677169 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.677179 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8phq\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-kube-api-access-b8phq\") on node \"crc\" DevicePath \"\""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.677189 5012 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0095712-262e-4562-afac-0f2f4372224d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.677197 5012 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0095712-262e-4562-afac-0f2f4372224d-server-conf\") on node \"crc\" DevicePath \"\""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.677221 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.711189 5012 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.732046 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b0095712-262e-4562-afac-0f2f4372224d" (UID: "b0095712-262e-4562-afac-0f2f4372224d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.779892 5012 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0095712-262e-4562-afac-0f2f4372224d-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.779924 5012 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.898252 5012 generic.go:334] "Generic (PLEG): container finished" podID="b0095712-262e-4562-afac-0f2f4372224d" containerID="dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32" exitCode=0
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.898293 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0095712-262e-4562-afac-0f2f4372224d","Type":"ContainerDied","Data":"dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32"}
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.898352 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0095712-262e-4562-afac-0f2f4372224d","Type":"ContainerDied","Data":"a9ce4884d01424dd045dfa7d8118a6965b35bc2fd9ba564b1b28a67e56e88f01"}
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.898371 5012 scope.go:117] "RemoveContainer" containerID="dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.898526 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.935564 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.940677 5012 scope.go:117] "RemoveContainer" containerID="1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.960391 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.969046 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 05:47:02 crc kubenswrapper[5012]: E0219 05:47:02.969656 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0095712-262e-4562-afac-0f2f4372224d" containerName="setup-container"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.969677 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0095712-262e-4562-afac-0f2f4372224d" containerName="setup-container"
Feb 19 05:47:02 crc kubenswrapper[5012]: E0219 05:47:02.969718 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0095712-262e-4562-afac-0f2f4372224d" containerName="rabbitmq"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.969727 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0095712-262e-4562-afac-0f2f4372224d" containerName="rabbitmq"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.970017 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0095712-262e-4562-afac-0f2f4372224d" containerName="rabbitmq"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.971453 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.978680 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.980366 5012 scope.go:117] "RemoveContainer" containerID="dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32"
Feb 19 05:47:02 crc kubenswrapper[5012]: E0219 05:47:02.981215 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32\": container with ID starting with dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32 not found: ID does not exist" containerID="dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.981245 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32"} err="failed to get container status \"dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32\": rpc error: code = NotFound desc = could not find container \"dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32\": container with ID starting with dc582d079ff6aa58d4fca2b72049a89a8913336121fd065c789fc5d8ab8b5c32 not found: ID does not exist"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.981271 5012 scope.go:117] "RemoveContainer" containerID="1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f"
Feb 19 05:47:02 crc kubenswrapper[5012]: E0219 05:47:02.983581 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f\": container with ID starting with 1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f not found: ID does not exist" containerID="1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.983639 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f"} err="failed to get container status \"1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f\": rpc error: code = NotFound desc = could not find container \"1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f\": container with ID starting with 1f607fa42643392d432437053c1d287c4856164a949fc456b001973c4a181f3f not found: ID does not exist"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.988152 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.988430 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.988553 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.988613 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-s7g27"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.988570 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.988562 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 19 05:47:02 crc kubenswrapper[5012]: I0219 05:47:02.995478 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.086894 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087446 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3230f97-dbe4-42a2-b009-a8370c601e78-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087478 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3230f97-dbe4-42a2-b009-a8370c601e78-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087521 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3230f97-dbe4-42a2-b009-a8370c601e78-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087544 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087585 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087603 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md7tl\" (UniqueName: \"kubernetes.io/projected/c3230f97-dbe4-42a2-b009-a8370c601e78-kube-api-access-md7tl\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087623 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087658 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3230f97-dbe4-42a2-b009-a8370c601e78-config-data\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087706 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3230f97-dbe4-42a2-b009-a8370c601e78-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.087739 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.189815 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.189901 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3230f97-dbe4-42a2-b009-a8370c601e78-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.189936 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3230f97-dbe4-42a2-b009-a8370c601e78-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.189974 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3230f97-dbe4-42a2-b009-a8370c601e78-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.189993 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.190030 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.190052 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md7tl\" (UniqueName: \"kubernetes.io/projected/c3230f97-dbe4-42a2-b009-a8370c601e78-kube-api-access-md7tl\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.190073 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.190102 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3230f97-dbe4-42a2-b009-a8370c601e78-config-data\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.190161 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3230f97-dbe4-42a2-b009-a8370c601e78-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.190192 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.190681 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.190996 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.192892 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.194013 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c3230f97-dbe4-42a2-b009-a8370c601e78-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.194678 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c3230f97-dbe4-42a2-b009-a8370c601e78-config-data\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.194699 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c3230f97-dbe4-42a2-b009-a8370c601e78-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.211283 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.215075 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c3230f97-dbe4-42a2-b009-a8370c601e78-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.218964 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c3230f97-dbe4-42a2-b009-a8370c601e78-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.239162 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md7tl\" (UniqueName: \"kubernetes.io/projected/c3230f97-dbe4-42a2-b009-a8370c601e78-kube-api-access-md7tl\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.239816 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c3230f97-dbe4-42a2-b009-a8370c601e78-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.269675 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"c3230f97-dbe4-42a2-b009-a8370c601e78\") " pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.319609 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.614472 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728443 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-erlang-cookie\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") "
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728562 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-confd\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") "
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728611 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a13d3004-2045-4daf-a925-7eccf541b1b4-pod-info\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") "
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728662 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-tls\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") "
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728695 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a13d3004-2045-4daf-a925-7eccf541b1b4-erlang-cookie-secret\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") "
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728732 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-plugins\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") "
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728760 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zch8n\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-kube-api-access-zch8n\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") "
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728786 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") "
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728846 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-plugins-conf\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") "
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728878 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-server-conf\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") "
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.728968 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-config-data\") pod \"a13d3004-2045-4daf-a925-7eccf541b1b4\" (UID: \"a13d3004-2045-4daf-a925-7eccf541b1b4\") "
Feb 19 05:47:03 crc kubenswrapper[5012]: I0219
05:47:03.732562 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.734012 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a13d3004-2045-4daf-a925-7eccf541b1b4-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.734330 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.734348 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.735720 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.738423 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a13d3004-2045-4daf-a925-7eccf541b1b4-pod-info" (OuterVolumeSpecName: "pod-info") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.738640 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-kube-api-access-zch8n" (OuterVolumeSpecName: "kube-api-access-zch8n") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "kube-api-access-zch8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.742577 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.760450 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-config-data" (OuterVolumeSpecName: "config-data") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.792001 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-server-conf" (OuterVolumeSpecName: "server-conf") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836861 5012 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836889 5012 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836901 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a13d3004-2045-4daf-a925-7eccf541b1b4-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836910 5012 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 
19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836920 5012 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a13d3004-2045-4daf-a925-7eccf541b1b4-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836928 5012 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836936 5012 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a13d3004-2045-4daf-a925-7eccf541b1b4-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836943 5012 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836953 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zch8n\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-kube-api-access-zch8n\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.836972 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.858373 5012 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.884754 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a13d3004-2045-4daf-a925-7eccf541b1b4" (UID: "a13d3004-2045-4daf-a925-7eccf541b1b4"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.907989 5012 generic.go:334] "Generic (PLEG): container finished" podID="a13d3004-2045-4daf-a925-7eccf541b1b4" containerID="0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7" exitCode=0 Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.908938 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a13d3004-2045-4daf-a925-7eccf541b1b4","Type":"ContainerDied","Data":"0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7"} Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.909018 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a13d3004-2045-4daf-a925-7eccf541b1b4","Type":"ContainerDied","Data":"856efb676cb6080920d1573427ad1823ab21a0fe78f76dfb2cca62d969151964"} Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.909107 5012 scope.go:117] "RemoveContainer" containerID="0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.909260 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.938920 5012 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a13d3004-2045-4daf-a925-7eccf541b1b4-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.939356 5012 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.946488 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.962831 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.964561 5012 scope.go:117] "RemoveContainer" containerID="0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e" Feb 19 05:47:03 crc kubenswrapper[5012]: I0219 05:47:03.991956 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.001020 5012 scope.go:117] "RemoveContainer" containerID="0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7" Feb 19 05:47:04 crc kubenswrapper[5012]: E0219 05:47:04.001470 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7\": container with ID starting with 0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7 not found: ID does not exist" containerID="0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.001499 5012 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7"} err="failed to get container status \"0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7\": rpc error: code = NotFound desc = could not find container \"0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7\": container with ID starting with 0fd5e28d222ddf0c00042a9db861acdbdefb85ddbf7264845212b5ed042994e7 not found: ID does not exist" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.001515 5012 scope.go:117] "RemoveContainer" containerID="0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.001561 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 05:47:04 crc kubenswrapper[5012]: E0219 05:47:04.002043 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a13d3004-2045-4daf-a925-7eccf541b1b4" containerName="setup-container" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.002061 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a13d3004-2045-4daf-a925-7eccf541b1b4" containerName="setup-container" Feb 19 05:47:04 crc kubenswrapper[5012]: E0219 05:47:04.002069 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a13d3004-2045-4daf-a925-7eccf541b1b4" containerName="rabbitmq" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.002076 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a13d3004-2045-4daf-a925-7eccf541b1b4" containerName="rabbitmq" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.002276 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a13d3004-2045-4daf-a925-7eccf541b1b4" containerName="rabbitmq" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.003624 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: E0219 05:47:04.005095 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e\": container with ID starting with 0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e not found: ID does not exist" containerID="0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.005117 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e"} err="failed to get container status \"0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e\": rpc error: code = NotFound desc = could not find container \"0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e\": container with ID starting with 0979e4041894540f5e165445792b2969f8e19eade6df171733ff24e5678eaf8e not found: ID does not exist" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.006187 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.006314 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.007120 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.007415 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.007793 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 
05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.007959 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hd6wk" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.008009 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.014878 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143091 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143141 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143180 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143202 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143224 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143254 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8khhf\" (UniqueName: \"kubernetes.io/projected/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-kube-api-access-8khhf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143362 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143397 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143425 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143453 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.143475 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244697 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244738 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244763 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244783 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244799 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244819 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8khhf\" (UniqueName: \"kubernetes.io/projected/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-kube-api-access-8khhf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244859 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244886 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc 
kubenswrapper[5012]: I0219 05:47:04.244906 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244925 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.244944 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.246572 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.247064 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.247261 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.247440 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.247502 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.248005 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.250760 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.252373 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.254215 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.272331 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8khhf\" (UniqueName: \"kubernetes.io/projected/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-kube-api-access-8khhf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.289922 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4984f0c1-33e8-4506-b6d7-e554dca0e4c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.318566 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4984f0c1-33e8-4506-b6d7-e554dca0e4c8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.477773 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.727437 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a13d3004-2045-4daf-a925-7eccf541b1b4" path="/var/lib/kubelet/pods/a13d3004-2045-4daf-a925-7eccf541b1b4/volumes" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.728849 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0095712-262e-4562-afac-0f2f4372224d" path="/var/lib/kubelet/pods/b0095712-262e-4562-afac-0f2f4372224d/volumes" Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.932586 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c3230f97-dbe4-42a2-b009-a8370c601e78","Type":"ContainerStarted","Data":"d354f62d44e5f63a95b6b079a247b5cc3acc6fb019f9aee96c3454d111e96a36"} Feb 19 05:47:04 crc kubenswrapper[5012]: I0219 05:47:04.988222 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 05:47:05 crc kubenswrapper[5012]: I0219 05:47:05.943371 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4984f0c1-33e8-4506-b6d7-e554dca0e4c8","Type":"ContainerStarted","Data":"e5696085150a0dd54f0403f350a377e9916faf97409e0b34d5b0f533eb7c29d9"} Feb 19 05:47:05 crc kubenswrapper[5012]: I0219 05:47:05.945687 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c3230f97-dbe4-42a2-b009-a8370c601e78","Type":"ContainerStarted","Data":"5b94da6e2f63b65256a5323ada20105efbf8a87206940d39e3ae90200a8c11c8"} Feb 19 05:47:06 crc kubenswrapper[5012]: I0219 05:47:06.958994 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4984f0c1-33e8-4506-b6d7-e554dca0e4c8","Type":"ContainerStarted","Data":"efce8bcde4f16e1cdff511b08b15a8a5dfb4bcdfa22431bcd1cde7bae1124379"} Feb 19 05:47:10 crc kubenswrapper[5012]: I0219 
05:47:10.971756 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-645c76756f-nk9vx"] Feb 19 05:47:10 crc kubenswrapper[5012]: I0219 05:47:10.975528 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:10 crc kubenswrapper[5012]: I0219 05:47:10.978255 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 19 05:47:10 crc kubenswrapper[5012]: I0219 05:47:10.981892 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-645c76756f-nk9vx"] Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.024242 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7k6c\" (UniqueName: \"kubernetes.io/projected/5e2a5c46-de05-416e-886e-f52dadc04d9f-kube-api-access-z7k6c\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.024546 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-config\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.039746 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-svc\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.039839 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-openstack-edpm-ipam\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.040079 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-sb\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.040133 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-nb\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.040187 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-swift-storage-0\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.142522 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-svc\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.142609 5012 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-openstack-edpm-ipam\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.142755 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-sb\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.142799 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-nb\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.142844 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-swift-storage-0\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.142919 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7k6c\" (UniqueName: \"kubernetes.io/projected/5e2a5c46-de05-416e-886e-f52dadc04d9f-kube-api-access-z7k6c\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.143009 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-config\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.144610 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-openstack-edpm-ipam\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.144622 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-svc\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.145442 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-config\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.145842 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-nb\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.146565 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-sb\") pod 
\"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.147591 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-swift-storage-0\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.170420 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7k6c\" (UniqueName: \"kubernetes.io/projected/5e2a5c46-de05-416e-886e-f52dadc04d9f-kube-api-access-z7k6c\") pod \"dnsmasq-dns-645c76756f-nk9vx\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.319938 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:11 crc kubenswrapper[5012]: I0219 05:47:11.648258 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-645c76756f-nk9vx"] Feb 19 05:47:12 crc kubenswrapper[5012]: I0219 05:47:12.068539 5012 generic.go:334] "Generic (PLEG): container finished" podID="5e2a5c46-de05-416e-886e-f52dadc04d9f" containerID="edfd4a155bf6b965b21fa78e413f2f1a7e00b71bc7aa7c9a548683c08bce3705" exitCode=0 Feb 19 05:47:12 crc kubenswrapper[5012]: I0219 05:47:12.068625 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" event={"ID":"5e2a5c46-de05-416e-886e-f52dadc04d9f","Type":"ContainerDied","Data":"edfd4a155bf6b965b21fa78e413f2f1a7e00b71bc7aa7c9a548683c08bce3705"} Feb 19 05:47:12 crc kubenswrapper[5012]: I0219 05:47:12.068860 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" event={"ID":"5e2a5c46-de05-416e-886e-f52dadc04d9f","Type":"ContainerStarted","Data":"69c77bc6b304c47cc6f82821180fd1b3759319ae800b54ff837c06429e637adf"} Feb 19 05:47:13 crc kubenswrapper[5012]: I0219 05:47:13.085970 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" event={"ID":"5e2a5c46-de05-416e-886e-f52dadc04d9f","Type":"ContainerStarted","Data":"8c2da3852706a5c0cbb7c3369248406b0a90c7dfcb08b14d1ad6a6b5c78521d6"} Feb 19 05:47:13 crc kubenswrapper[5012]: I0219 05:47:13.086775 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:13 crc kubenswrapper[5012]: I0219 05:47:13.128690 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" podStartSLOduration=3.128669109 podStartE2EDuration="3.128669109s" podCreationTimestamp="2026-02-19 05:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:47:13.119579778 +0000 UTC m=+1329.152902387" watchObservedRunningTime="2026-02-19 05:47:13.128669109 +0000 UTC m=+1329.161991688" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.322625 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.459107 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85446bf977-vzlgl"] Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.461513 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" podUID="0ee4ae6f-65e3-4467-8302-54381eeebd5a" containerName="dnsmasq-dns" containerID="cri-o://d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498" gracePeriod=10 Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.602801 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-567c7bc999-cgf2v"] Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.604455 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.625025 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-567c7bc999-cgf2v"] Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.712082 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-ovsdbserver-sb\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.712136 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x65g8\" (UniqueName: \"kubernetes.io/projected/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-kube-api-access-x65g8\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.712161 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-config\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.712183 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-dns-svc\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.712237 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-dns-swift-storage-0\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.712252 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-openstack-edpm-ipam\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.712281 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-ovsdbserver-nb\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.814135 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-dns-swift-storage-0\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.814173 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-openstack-edpm-ipam\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.814218 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-ovsdbserver-nb\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.814857 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-ovsdbserver-sb\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.814998 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x65g8\" (UniqueName: \"kubernetes.io/projected/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-kube-api-access-x65g8\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.815005 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-openstack-edpm-ipam\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.815086 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-config\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.815161 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-dns-svc\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.816324 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-ovsdbserver-sb\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.816539 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-dns-svc\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.816695 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-config\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.816759 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-dns-swift-storage-0\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.816906 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-ovsdbserver-nb\") pod 
\"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.853756 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x65g8\" (UniqueName: \"kubernetes.io/projected/c2eab861-ab13-4ab1-b57f-fecf9e95b9be-kube-api-access-x65g8\") pod \"dnsmasq-dns-567c7bc999-cgf2v\" (UID: \"c2eab861-ab13-4ab1-b57f-fecf9e95b9be\") " pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:21 crc kubenswrapper[5012]: I0219 05:47:21.952566 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.034005 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.121424 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-sb\") pod \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.121481 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-swift-storage-0\") pod \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.121522 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-svc\") pod \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " Feb 19 05:47:22 crc kubenswrapper[5012]: 
I0219 05:47:22.121605 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-nb\") pod \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.121678 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb4s2\" (UniqueName: \"kubernetes.io/projected/0ee4ae6f-65e3-4467-8302-54381eeebd5a-kube-api-access-nb4s2\") pod \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.121743 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-config\") pod \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\" (UID: \"0ee4ae6f-65e3-4467-8302-54381eeebd5a\") " Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.130916 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ee4ae6f-65e3-4467-8302-54381eeebd5a-kube-api-access-nb4s2" (OuterVolumeSpecName: "kube-api-access-nb4s2") pod "0ee4ae6f-65e3-4467-8302-54381eeebd5a" (UID: "0ee4ae6f-65e3-4467-8302-54381eeebd5a"). InnerVolumeSpecName "kube-api-access-nb4s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.173432 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ee4ae6f-65e3-4467-8302-54381eeebd5a" (UID: "0ee4ae6f-65e3-4467-8302-54381eeebd5a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.176809 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0ee4ae6f-65e3-4467-8302-54381eeebd5a" (UID: "0ee4ae6f-65e3-4467-8302-54381eeebd5a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.183284 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-config" (OuterVolumeSpecName: "config") pod "0ee4ae6f-65e3-4467-8302-54381eeebd5a" (UID: "0ee4ae6f-65e3-4467-8302-54381eeebd5a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.191195 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0ee4ae6f-65e3-4467-8302-54381eeebd5a" (UID: "0ee4ae6f-65e3-4467-8302-54381eeebd5a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.194962 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0ee4ae6f-65e3-4467-8302-54381eeebd5a" (UID: "0ee4ae6f-65e3-4467-8302-54381eeebd5a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.196256 5012 generic.go:334] "Generic (PLEG): container finished" podID="0ee4ae6f-65e3-4467-8302-54381eeebd5a" containerID="d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498" exitCode=0 Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.196294 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" event={"ID":"0ee4ae6f-65e3-4467-8302-54381eeebd5a","Type":"ContainerDied","Data":"d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498"} Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.196333 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" event={"ID":"0ee4ae6f-65e3-4467-8302-54381eeebd5a","Type":"ContainerDied","Data":"76c330a33b78602a2e427fa0cfc346da48f97fdaa4760b156caa3f21371da964"} Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.196349 5012 scope.go:117] "RemoveContainer" containerID="d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.196472 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85446bf977-vzlgl" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.224807 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.224853 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.224872 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.224888 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb4s2\" (UniqueName: \"kubernetes.io/projected/0ee4ae6f-65e3-4467-8302-54381eeebd5a-kube-api-access-nb4s2\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.224906 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.224924 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ee4ae6f-65e3-4467-8302-54381eeebd5a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.225829 5012 scope.go:117] "RemoveContainer" containerID="26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.239423 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-85446bf977-vzlgl"] Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.247780 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85446bf977-vzlgl"] Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.256450 5012 scope.go:117] "RemoveContainer" containerID="d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498" Feb 19 05:47:22 crc kubenswrapper[5012]: E0219 05:47:22.256908 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498\": container with ID starting with d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498 not found: ID does not exist" containerID="d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.256938 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498"} err="failed to get container status \"d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498\": rpc error: code = NotFound desc = could not find container \"d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498\": container with ID starting with d7ff4528b5199ee58a0ac98408a5f7e44d69f5d5d3f29454dea4ee6d0e4d1498 not found: ID does not exist" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.256958 5012 scope.go:117] "RemoveContainer" containerID="26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488" Feb 19 05:47:22 crc kubenswrapper[5012]: E0219 05:47:22.257369 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488\": container with ID starting with 26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488 not found: ID 
does not exist" containerID="26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.257403 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488"} err="failed to get container status \"26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488\": rpc error: code = NotFound desc = could not find container \"26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488\": container with ID starting with 26e6be7343745e28defdfb95dbedf13ef550406a9416d8051b48f973b501b488 not found: ID does not exist" Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.403853 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-567c7bc999-cgf2v"] Feb 19 05:47:22 crc kubenswrapper[5012]: I0219 05:47:22.714364 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ee4ae6f-65e3-4467-8302-54381eeebd5a" path="/var/lib/kubelet/pods/0ee4ae6f-65e3-4467-8302-54381eeebd5a/volumes" Feb 19 05:47:23 crc kubenswrapper[5012]: I0219 05:47:23.228292 5012 generic.go:334] "Generic (PLEG): container finished" podID="c2eab861-ab13-4ab1-b57f-fecf9e95b9be" containerID="25dd4695daea4f17fcb807b874ac4e301bc313a41f0e67b92a9e96821a21a22b" exitCode=0 Feb 19 05:47:23 crc kubenswrapper[5012]: I0219 05:47:23.228384 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" event={"ID":"c2eab861-ab13-4ab1-b57f-fecf9e95b9be","Type":"ContainerDied","Data":"25dd4695daea4f17fcb807b874ac4e301bc313a41f0e67b92a9e96821a21a22b"} Feb 19 05:47:23 crc kubenswrapper[5012]: I0219 05:47:23.228424 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" event={"ID":"c2eab861-ab13-4ab1-b57f-fecf9e95b9be","Type":"ContainerStarted","Data":"cbf8ac96d7b31681d13f7cfef4554591ea01771a74e619e1f160646a3c89d3a4"} Feb 19 
05:47:24 crc kubenswrapper[5012]: I0219 05:47:24.251547 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" event={"ID":"c2eab861-ab13-4ab1-b57f-fecf9e95b9be","Type":"ContainerStarted","Data":"875d6defbdeb9cb80a35af95c4c12714cb388e66161dba52f95953708cc102c9"} Feb 19 05:47:24 crc kubenswrapper[5012]: I0219 05:47:24.251783 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:24 crc kubenswrapper[5012]: I0219 05:47:24.286904 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" podStartSLOduration=3.286885845 podStartE2EDuration="3.286885845s" podCreationTimestamp="2026-02-19 05:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:47:24.283357019 +0000 UTC m=+1340.316679588" watchObservedRunningTime="2026-02-19 05:47:24.286885845 +0000 UTC m=+1340.320208414" Feb 19 05:47:31 crc kubenswrapper[5012]: I0219 05:47:31.954569 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-567c7bc999-cgf2v" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.067944 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-645c76756f-nk9vx"] Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.068253 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" podUID="5e2a5c46-de05-416e-886e-f52dadc04d9f" containerName="dnsmasq-dns" containerID="cri-o://8c2da3852706a5c0cbb7c3369248406b0a90c7dfcb08b14d1ad6a6b5c78521d6" gracePeriod=10 Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.358902 5012 generic.go:334] "Generic (PLEG): container finished" podID="5e2a5c46-de05-416e-886e-f52dadc04d9f" 
containerID="8c2da3852706a5c0cbb7c3369248406b0a90c7dfcb08b14d1ad6a6b5c78521d6" exitCode=0 Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.358978 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" event={"ID":"5e2a5c46-de05-416e-886e-f52dadc04d9f","Type":"ContainerDied","Data":"8c2da3852706a5c0cbb7c3369248406b0a90c7dfcb08b14d1ad6a6b5c78521d6"} Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.747054 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.887617 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-nb\") pod \"5e2a5c46-de05-416e-886e-f52dadc04d9f\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.887683 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-swift-storage-0\") pod \"5e2a5c46-de05-416e-886e-f52dadc04d9f\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.887724 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-sb\") pod \"5e2a5c46-de05-416e-886e-f52dadc04d9f\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.887749 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-config\") pod \"5e2a5c46-de05-416e-886e-f52dadc04d9f\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") 
" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.887865 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-openstack-edpm-ipam\") pod \"5e2a5c46-de05-416e-886e-f52dadc04d9f\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.888063 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-svc\") pod \"5e2a5c46-de05-416e-886e-f52dadc04d9f\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.888142 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7k6c\" (UniqueName: \"kubernetes.io/projected/5e2a5c46-de05-416e-886e-f52dadc04d9f-kube-api-access-z7k6c\") pod \"5e2a5c46-de05-416e-886e-f52dadc04d9f\" (UID: \"5e2a5c46-de05-416e-886e-f52dadc04d9f\") " Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.903245 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e2a5c46-de05-416e-886e-f52dadc04d9f-kube-api-access-z7k6c" (OuterVolumeSpecName: "kube-api-access-z7k6c") pod "5e2a5c46-de05-416e-886e-f52dadc04d9f" (UID: "5e2a5c46-de05-416e-886e-f52dadc04d9f"). InnerVolumeSpecName "kube-api-access-z7k6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.957656 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5e2a5c46-de05-416e-886e-f52dadc04d9f" (UID: "5e2a5c46-de05-416e-886e-f52dadc04d9f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.960629 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "5e2a5c46-de05-416e-886e-f52dadc04d9f" (UID: "5e2a5c46-de05-416e-886e-f52dadc04d9f"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.963656 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5e2a5c46-de05-416e-886e-f52dadc04d9f" (UID: "5e2a5c46-de05-416e-886e-f52dadc04d9f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.964826 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5e2a5c46-de05-416e-886e-f52dadc04d9f" (UID: "5e2a5c46-de05-416e-886e-f52dadc04d9f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.965158 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-config" (OuterVolumeSpecName: "config") pod "5e2a5c46-de05-416e-886e-f52dadc04d9f" (UID: "5e2a5c46-de05-416e-886e-f52dadc04d9f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.992126 5012 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.992158 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7k6c\" (UniqueName: \"kubernetes.io/projected/5e2a5c46-de05-416e-886e-f52dadc04d9f-kube-api-access-z7k6c\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.992175 5012 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.992185 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.992194 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-config\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:32 crc kubenswrapper[5012]: I0219 05:47:32.992203 5012 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:33 crc kubenswrapper[5012]: I0219 05:47:33.003246 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5e2a5c46-de05-416e-886e-f52dadc04d9f" (UID: 
"5e2a5c46-de05-416e-886e-f52dadc04d9f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:47:33 crc kubenswrapper[5012]: I0219 05:47:33.093049 5012 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e2a5c46-de05-416e-886e-f52dadc04d9f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 05:47:33 crc kubenswrapper[5012]: I0219 05:47:33.372999 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" event={"ID":"5e2a5c46-de05-416e-886e-f52dadc04d9f","Type":"ContainerDied","Data":"69c77bc6b304c47cc6f82821180fd1b3759319ae800b54ff837c06429e637adf"} Feb 19 05:47:33 crc kubenswrapper[5012]: I0219 05:47:33.373055 5012 scope.go:117] "RemoveContainer" containerID="8c2da3852706a5c0cbb7c3369248406b0a90c7dfcb08b14d1ad6a6b5c78521d6" Feb 19 05:47:33 crc kubenswrapper[5012]: I0219 05:47:33.373081 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-645c76756f-nk9vx" Feb 19 05:47:33 crc kubenswrapper[5012]: I0219 05:47:33.400779 5012 scope.go:117] "RemoveContainer" containerID="edfd4a155bf6b965b21fa78e413f2f1a7e00b71bc7aa7c9a548683c08bce3705" Feb 19 05:47:33 crc kubenswrapper[5012]: I0219 05:47:33.428048 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-645c76756f-nk9vx"] Feb 19 05:47:33 crc kubenswrapper[5012]: I0219 05:47:33.440849 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-645c76756f-nk9vx"] Feb 19 05:47:34 crc kubenswrapper[5012]: I0219 05:47:34.718379 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e2a5c46-de05-416e-886e-f52dadc04d9f" path="/var/lib/kubelet/pods/5e2a5c46-de05-416e-886e-f52dadc04d9f/volumes" Feb 19 05:47:39 crc kubenswrapper[5012]: I0219 05:47:39.430290 5012 generic.go:334] "Generic (PLEG): container finished" podID="c3230f97-dbe4-42a2-b009-a8370c601e78" containerID="5b94da6e2f63b65256a5323ada20105efbf8a87206940d39e3ae90200a8c11c8" exitCode=0 Feb 19 05:47:39 crc kubenswrapper[5012]: I0219 05:47:39.430342 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c3230f97-dbe4-42a2-b009-a8370c601e78","Type":"ContainerDied","Data":"5b94da6e2f63b65256a5323ada20105efbf8a87206940d39e3ae90200a8c11c8"} Feb 19 05:47:40 crc kubenswrapper[5012]: I0219 05:47:40.440940 5012 generic.go:334] "Generic (PLEG): container finished" podID="4984f0c1-33e8-4506-b6d7-e554dca0e4c8" containerID="efce8bcde4f16e1cdff511b08b15a8a5dfb4bcdfa22431bcd1cde7bae1124379" exitCode=0 Feb 19 05:47:40 crc kubenswrapper[5012]: I0219 05:47:40.441092 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4984f0c1-33e8-4506-b6d7-e554dca0e4c8","Type":"ContainerDied","Data":"efce8bcde4f16e1cdff511b08b15a8a5dfb4bcdfa22431bcd1cde7bae1124379"} Feb 19 05:47:40 crc kubenswrapper[5012]: I0219 
05:47:40.446970 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c3230f97-dbe4-42a2-b009-a8370c601e78","Type":"ContainerStarted","Data":"2216d27f73b168891c63b5b2774965132a7c9688deeb594eb17587339fcce48f"} Feb 19 05:47:40 crc kubenswrapper[5012]: I0219 05:47:40.447217 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 05:47:40 crc kubenswrapper[5012]: I0219 05:47:40.505962 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.505939343 podStartE2EDuration="38.505939343s" podCreationTimestamp="2026-02-19 05:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:47:40.499044936 +0000 UTC m=+1356.532367505" watchObservedRunningTime="2026-02-19 05:47:40.505939343 +0000 UTC m=+1356.539261912" Feb 19 05:47:41 crc kubenswrapper[5012]: I0219 05:47:41.458407 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4984f0c1-33e8-4506-b6d7-e554dca0e4c8","Type":"ContainerStarted","Data":"aa4c8b16f0bd68c5ef564e8d9831e5c6c5e141f31a19bfd116c23ecaf084cec4"} Feb 19 05:47:41 crc kubenswrapper[5012]: I0219 05:47:41.459272 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:41 crc kubenswrapper[5012]: I0219 05:47:41.495014 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.494991434 podStartE2EDuration="38.494991434s" podCreationTimestamp="2026-02-19 05:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 05:47:41.489354807 +0000 UTC m=+1357.522677386" watchObservedRunningTime="2026-02-19 
05:47:41.494991434 +0000 UTC m=+1357.528314013" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.720977 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267"] Feb 19 05:47:45 crc kubenswrapper[5012]: E0219 05:47:45.721763 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee4ae6f-65e3-4467-8302-54381eeebd5a" containerName="dnsmasq-dns" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.721775 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee4ae6f-65e3-4467-8302-54381eeebd5a" containerName="dnsmasq-dns" Feb 19 05:47:45 crc kubenswrapper[5012]: E0219 05:47:45.721788 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e2a5c46-de05-416e-886e-f52dadc04d9f" containerName="init" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.721796 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e2a5c46-de05-416e-886e-f52dadc04d9f" containerName="init" Feb 19 05:47:45 crc kubenswrapper[5012]: E0219 05:47:45.721803 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e2a5c46-de05-416e-886e-f52dadc04d9f" containerName="dnsmasq-dns" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.721809 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e2a5c46-de05-416e-886e-f52dadc04d9f" containerName="dnsmasq-dns" Feb 19 05:47:45 crc kubenswrapper[5012]: E0219 05:47:45.721833 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee4ae6f-65e3-4467-8302-54381eeebd5a" containerName="init" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.721838 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee4ae6f-65e3-4467-8302-54381eeebd5a" containerName="init" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.722042 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e2a5c46-de05-416e-886e-f52dadc04d9f" containerName="dnsmasq-dns" Feb 19 05:47:45 crc 
kubenswrapper[5012]: I0219 05:47:45.722069 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee4ae6f-65e3-4467-8302-54381eeebd5a" containerName="dnsmasq-dns" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.722736 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.725449 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.725645 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.725742 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.725742 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.735236 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267"] Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.805036 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.805130 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.805190 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rsr4\" (UniqueName: \"kubernetes.io/projected/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-kube-api-access-4rsr4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:45 crc kubenswrapper[5012]: I0219 05:47:45.805367 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:46 crc kubenswrapper[5012]: I0219 05:47:46.249879 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:46 crc kubenswrapper[5012]: I0219 05:47:46.249946 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: 
\"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:46 crc kubenswrapper[5012]: I0219 05:47:46.249990 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rsr4\" (UniqueName: \"kubernetes.io/projected/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-kube-api-access-4rsr4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:46 crc kubenswrapper[5012]: I0219 05:47:46.250080 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:46 crc kubenswrapper[5012]: I0219 05:47:46.256171 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:46 crc kubenswrapper[5012]: I0219 05:47:46.256761 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:46 crc kubenswrapper[5012]: I0219 05:47:46.259746 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:46 crc kubenswrapper[5012]: I0219 05:47:46.274585 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rsr4\" (UniqueName: \"kubernetes.io/projected/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-kube-api-access-4rsr4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-pl267\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:46 crc kubenswrapper[5012]: I0219 05:47:46.385144 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:47:47 crc kubenswrapper[5012]: I0219 05:47:47.149527 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267"] Feb 19 05:47:47 crc kubenswrapper[5012]: I0219 05:47:47.539088 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" event={"ID":"61bd41ab-cfea-4df2-9be0-8321c6c11ebd","Type":"ContainerStarted","Data":"312fd7dc0497e3a4381040e75f8869b7855c91d4636c392e802e463c727cce3d"} Feb 19 05:47:53 crc kubenswrapper[5012]: I0219 05:47:53.324692 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 05:47:54 crc kubenswrapper[5012]: I0219 05:47:54.481456 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 05:47:57 crc kubenswrapper[5012]: I0219 05:47:57.658260 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" event={"ID":"61bd41ab-cfea-4df2-9be0-8321c6c11ebd","Type":"ContainerStarted","Data":"5c5c63555870264dd8f5829bdd82139299500a7348f6bff770b40402e6e4c5e7"} Feb 19 05:47:57 crc kubenswrapper[5012]: I0219 05:47:57.672401 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" podStartSLOduration=2.487760755 podStartE2EDuration="12.672381586s" podCreationTimestamp="2026-02-19 05:47:45 +0000 UTC" firstStartedPulling="2026-02-19 05:47:47.158258126 +0000 UTC m=+1363.191580695" lastFinishedPulling="2026-02-19 05:47:57.342878917 +0000 UTC m=+1373.376201526" observedRunningTime="2026-02-19 05:47:57.671541816 +0000 UTC m=+1373.704864385" watchObservedRunningTime="2026-02-19 05:47:57.672381586 +0000 UTC m=+1373.705704175" Feb 19 05:48:08 crc kubenswrapper[5012]: I0219 05:48:08.794585 5012 generic.go:334] "Generic (PLEG): container finished" podID="61bd41ab-cfea-4df2-9be0-8321c6c11ebd" containerID="5c5c63555870264dd8f5829bdd82139299500a7348f6bff770b40402e6e4c5e7" exitCode=0 Feb 19 05:48:08 crc kubenswrapper[5012]: I0219 05:48:08.794696 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" event={"ID":"61bd41ab-cfea-4df2-9be0-8321c6c11ebd","Type":"ContainerDied","Data":"5c5c63555870264dd8f5829bdd82139299500a7348f6bff770b40402e6e4c5e7"} Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.330587 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.410956 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-repo-setup-combined-ca-bundle\") pod \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.411020 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rsr4\" (UniqueName: \"kubernetes.io/projected/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-kube-api-access-4rsr4\") pod \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.411125 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-inventory\") pod \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.411253 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-ssh-key-openstack-edpm-ipam\") pod \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\" (UID: \"61bd41ab-cfea-4df2-9be0-8321c6c11ebd\") " Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.419597 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "61bd41ab-cfea-4df2-9be0-8321c6c11ebd" (UID: "61bd41ab-cfea-4df2-9be0-8321c6c11ebd"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.424092 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-kube-api-access-4rsr4" (OuterVolumeSpecName: "kube-api-access-4rsr4") pod "61bd41ab-cfea-4df2-9be0-8321c6c11ebd" (UID: "61bd41ab-cfea-4df2-9be0-8321c6c11ebd"). InnerVolumeSpecName "kube-api-access-4rsr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.459985 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "61bd41ab-cfea-4df2-9be0-8321c6c11ebd" (UID: "61bd41ab-cfea-4df2-9be0-8321c6c11ebd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.462081 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-inventory" (OuterVolumeSpecName: "inventory") pod "61bd41ab-cfea-4df2-9be0-8321c6c11ebd" (UID: "61bd41ab-cfea-4df2-9be0-8321c6c11ebd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.514767 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.514806 5012 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.514821 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rsr4\" (UniqueName: \"kubernetes.io/projected/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-kube-api-access-4rsr4\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.514834 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61bd41ab-cfea-4df2-9be0-8321c6c11ebd-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.831603 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" event={"ID":"61bd41ab-cfea-4df2-9be0-8321c6c11ebd","Type":"ContainerDied","Data":"312fd7dc0497e3a4381040e75f8869b7855c91d4636c392e802e463c727cce3d"} Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.832042 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="312fd7dc0497e3a4381040e75f8869b7855c91d4636c392e802e463c727cce3d" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.832127 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-pl267" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.924165 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd"] Feb 19 05:48:10 crc kubenswrapper[5012]: E0219 05:48:10.924572 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61bd41ab-cfea-4df2-9be0-8321c6c11ebd" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.924588 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="61bd41ab-cfea-4df2-9be0-8321c6c11ebd" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.924765 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="61bd41ab-cfea-4df2-9be0-8321c6c11ebd" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.925397 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.928335 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.928567 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.928760 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.929783 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:48:10 crc kubenswrapper[5012]: I0219 05:48:10.942322 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd"] Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.028865 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj5p7\" (UniqueName: \"kubernetes.io/projected/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-kube-api-access-wj5p7\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-skvzd\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.029016 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-skvzd\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.029063 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-skvzd\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.130719 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-skvzd\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.131002 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-skvzd\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.131215 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj5p7\" (UniqueName: \"kubernetes.io/projected/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-kube-api-access-wj5p7\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-skvzd\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.134725 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-skvzd\" (UID: 
\"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.147820 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj5p7\" (UniqueName: \"kubernetes.io/projected/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-kube-api-access-wj5p7\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-skvzd\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.148031 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-skvzd\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.242478 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.786821 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd"] Feb 19 05:48:11 crc kubenswrapper[5012]: I0219 05:48:11.852155 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" event={"ID":"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf","Type":"ContainerStarted","Data":"ebc32bcde894477353387c9790c11ae4abdee0c0ff7499b7ab0358220f947c8f"} Feb 19 05:48:12 crc kubenswrapper[5012]: I0219 05:48:12.869112 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" event={"ID":"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf","Type":"ContainerStarted","Data":"46af40643823a33247a982604a9c72359b1d69b943d4655962b2a8e91ff0bdef"} Feb 19 05:48:12 crc kubenswrapper[5012]: I0219 05:48:12.892888 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" podStartSLOduration=2.443493901 podStartE2EDuration="2.892862598s" podCreationTimestamp="2026-02-19 05:48:10 +0000 UTC" firstStartedPulling="2026-02-19 05:48:11.793922443 +0000 UTC m=+1387.827245012" lastFinishedPulling="2026-02-19 05:48:12.24329114 +0000 UTC m=+1388.276613709" observedRunningTime="2026-02-19 05:48:12.888468821 +0000 UTC m=+1388.921791430" watchObservedRunningTime="2026-02-19 05:48:12.892862598 +0000 UTC m=+1388.926185207" Feb 19 05:48:15 crc kubenswrapper[5012]: I0219 05:48:15.903136 5012 generic.go:334] "Generic (PLEG): container finished" podID="07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf" containerID="46af40643823a33247a982604a9c72359b1d69b943d4655962b2a8e91ff0bdef" exitCode=0 Feb 19 05:48:15 crc kubenswrapper[5012]: I0219 05:48:15.903546 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" event={"ID":"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf","Type":"ContainerDied","Data":"46af40643823a33247a982604a9c72359b1d69b943d4655962b2a8e91ff0bdef"} Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.445652 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.632197 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-inventory\") pod \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.632399 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-ssh-key-openstack-edpm-ipam\") pod \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.633539 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj5p7\" (UniqueName: \"kubernetes.io/projected/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-kube-api-access-wj5p7\") pod \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\" (UID: \"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf\") " Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.658644 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-kube-api-access-wj5p7" (OuterVolumeSpecName: "kube-api-access-wj5p7") pod "07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf" (UID: "07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf"). InnerVolumeSpecName "kube-api-access-wj5p7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.680048 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf" (UID: "07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.691199 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-inventory" (OuterVolumeSpecName: "inventory") pod "07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf" (UID: "07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.737633 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.737694 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.737717 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj5p7\" (UniqueName: \"kubernetes.io/projected/07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf-kube-api-access-wj5p7\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.930425 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" 
event={"ID":"07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf","Type":"ContainerDied","Data":"ebc32bcde894477353387c9790c11ae4abdee0c0ff7499b7ab0358220f947c8f"} Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.930494 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebc32bcde894477353387c9790c11ae4abdee0c0ff7499b7ab0358220f947c8f" Feb 19 05:48:17 crc kubenswrapper[5012]: I0219 05:48:17.930510 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-skvzd" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.064387 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb"] Feb 19 05:48:18 crc kubenswrapper[5012]: E0219 05:48:18.064857 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.064871 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.065088 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.065777 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.069332 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v85qw\" (UniqueName: \"kubernetes.io/projected/ebf47868-aec9-4f2e-8c08-499161f45b18-kube-api-access-v85qw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.069395 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.069488 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.069667 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.069695 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.069727 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.070019 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.070075 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.103366 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb"] Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.171256 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v85qw\" (UniqueName: \"kubernetes.io/projected/ebf47868-aec9-4f2e-8c08-499161f45b18-kube-api-access-v85qw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.171406 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.171470 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.171493 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.175673 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.175730 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.176929 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.200584 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v85qw\" (UniqueName: \"kubernetes.io/projected/ebf47868-aec9-4f2e-8c08-499161f45b18-kube-api-access-v85qw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:18 crc kubenswrapper[5012]: I0219 05:48:18.393427 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:48:19 crc kubenswrapper[5012]: I0219 05:48:19.100956 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb"] Feb 19 05:48:19 crc kubenswrapper[5012]: I0219 05:48:19.968585 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" event={"ID":"ebf47868-aec9-4f2e-8c08-499161f45b18","Type":"ContainerStarted","Data":"29f44073d44e2b9740f67ef79845a00f042a9a8a60fe1de16bde1fbb0612c36e"} Feb 19 05:48:19 crc kubenswrapper[5012]: I0219 05:48:19.969746 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" event={"ID":"ebf47868-aec9-4f2e-8c08-499161f45b18","Type":"ContainerStarted","Data":"47a7d8825e1acf3d49de6e07e3e26af34d28c13a97cb0ebcbb15be03c70da6f3"} Feb 19 05:48:19 crc kubenswrapper[5012]: I0219 05:48:19.997891 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" podStartSLOduration=1.611738653 podStartE2EDuration="1.997867471s" podCreationTimestamp="2026-02-19 05:48:18 +0000 UTC" firstStartedPulling="2026-02-19 05:48:19.094621937 +0000 UTC m=+1395.127944546" 
lastFinishedPulling="2026-02-19 05:48:19.480750755 +0000 UTC m=+1395.514073364" observedRunningTime="2026-02-19 05:48:19.989872076 +0000 UTC m=+1396.023194655" watchObservedRunningTime="2026-02-19 05:48:19.997867471 +0000 UTC m=+1396.031190040" Feb 19 05:48:32 crc kubenswrapper[5012]: I0219 05:48:32.835821 5012 scope.go:117] "RemoveContainer" containerID="8f0dc1aa57e08411f9d0f619e65ecab31defd41e57bdd287ce850d95e5dc2423" Feb 19 05:48:32 crc kubenswrapper[5012]: I0219 05:48:32.880224 5012 scope.go:117] "RemoveContainer" containerID="abc0139cac003d44d29c14053f3981b5bda18d4f49ee4f01ff970a93700f4fc7" Feb 19 05:48:32 crc kubenswrapper[5012]: I0219 05:48:32.952953 5012 scope.go:117] "RemoveContainer" containerID="d8e57b0f2b52b5aa983f227ca12d7b7d13d90cca4cada2357120cb84084b1554" Feb 19 05:48:32 crc kubenswrapper[5012]: I0219 05:48:32.995801 5012 scope.go:117] "RemoveContainer" containerID="110e2fb48dbdbaaee96e12fd6145e56296c9e6c4ec3ed95da58954f821868b52" Feb 19 05:48:34 crc kubenswrapper[5012]: I0219 05:48:34.931250 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dm5jf"] Feb 19 05:48:34 crc kubenswrapper[5012]: I0219 05:48:34.938071 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:34 crc kubenswrapper[5012]: I0219 05:48:34.947522 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dm5jf"] Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.086099 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-utilities\") pod \"redhat-operators-dm5jf\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.086198 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6mlb\" (UniqueName: \"kubernetes.io/projected/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-kube-api-access-q6mlb\") pod \"redhat-operators-dm5jf\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.086292 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-catalog-content\") pod \"redhat-operators-dm5jf\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.188654 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6mlb\" (UniqueName: \"kubernetes.io/projected/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-kube-api-access-q6mlb\") pod \"redhat-operators-dm5jf\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.188783 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-catalog-content\") pod \"redhat-operators-dm5jf\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.188900 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-utilities\") pod \"redhat-operators-dm5jf\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.189390 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-utilities\") pod \"redhat-operators-dm5jf\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.190000 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-catalog-content\") pod \"redhat-operators-dm5jf\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.214118 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6mlb\" (UniqueName: \"kubernetes.io/projected/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-kube-api-access-q6mlb\") pod \"redhat-operators-dm5jf\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.310327 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:35 crc kubenswrapper[5012]: I0219 05:48:35.791087 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dm5jf"] Feb 19 05:48:36 crc kubenswrapper[5012]: I0219 05:48:36.176572 5012 generic.go:334] "Generic (PLEG): container finished" podID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerID="96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481" exitCode=0 Feb 19 05:48:36 crc kubenswrapper[5012]: I0219 05:48:36.176622 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm5jf" event={"ID":"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4","Type":"ContainerDied","Data":"96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481"} Feb 19 05:48:36 crc kubenswrapper[5012]: I0219 05:48:36.176675 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm5jf" event={"ID":"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4","Type":"ContainerStarted","Data":"77be9753ec657fff700d5aa8c08179d08b1e0427037a7558362bc489f12b2bf9"} Feb 19 05:48:38 crc kubenswrapper[5012]: I0219 05:48:38.201686 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm5jf" event={"ID":"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4","Type":"ContainerStarted","Data":"4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc"} Feb 19 05:48:40 crc kubenswrapper[5012]: I0219 05:48:40.225189 5012 generic.go:334] "Generic (PLEG): container finished" podID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerID="4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc" exitCode=0 Feb 19 05:48:40 crc kubenswrapper[5012]: I0219 05:48:40.225285 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm5jf" 
event={"ID":"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4","Type":"ContainerDied","Data":"4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc"} Feb 19 05:48:41 crc kubenswrapper[5012]: I0219 05:48:41.239319 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm5jf" event={"ID":"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4","Type":"ContainerStarted","Data":"77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69"} Feb 19 05:48:41 crc kubenswrapper[5012]: I0219 05:48:41.266390 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dm5jf" podStartSLOduration=2.8300190069999998 podStartE2EDuration="7.266370789s" podCreationTimestamp="2026-02-19 05:48:34 +0000 UTC" firstStartedPulling="2026-02-19 05:48:36.178717626 +0000 UTC m=+1412.212040195" lastFinishedPulling="2026-02-19 05:48:40.615069408 +0000 UTC m=+1416.648391977" observedRunningTime="2026-02-19 05:48:41.260776463 +0000 UTC m=+1417.294099032" watchObservedRunningTime="2026-02-19 05:48:41.266370789 +0000 UTC m=+1417.299693358" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.530393 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zcjd2"] Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.535567 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.544022 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcjd2"] Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.703476 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cdlb\" (UniqueName: \"kubernetes.io/projected/934d7854-a117-4051-a05a-034327616c89-kube-api-access-4cdlb\") pod \"redhat-marketplace-zcjd2\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.703726 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-utilities\") pod \"redhat-marketplace-zcjd2\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.703761 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-catalog-content\") pod \"redhat-marketplace-zcjd2\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.808320 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cdlb\" (UniqueName: \"kubernetes.io/projected/934d7854-a117-4051-a05a-034327616c89-kube-api-access-4cdlb\") pod \"redhat-marketplace-zcjd2\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.808424 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-utilities\") pod \"redhat-marketplace-zcjd2\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.808443 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-catalog-content\") pod \"redhat-marketplace-zcjd2\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.808917 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-catalog-content\") pod \"redhat-marketplace-zcjd2\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.809027 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-utilities\") pod \"redhat-marketplace-zcjd2\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.830607 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cdlb\" (UniqueName: \"kubernetes.io/projected/934d7854-a117-4051-a05a-034327616c89-kube-api-access-4cdlb\") pod \"redhat-marketplace-zcjd2\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:43 crc kubenswrapper[5012]: I0219 05:48:43.863370 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:44 crc kubenswrapper[5012]: I0219 05:48:44.422384 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcjd2"] Feb 19 05:48:44 crc kubenswrapper[5012]: I0219 05:48:44.431116 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:48:44 crc kubenswrapper[5012]: I0219 05:48:44.431173 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:48:45 crc kubenswrapper[5012]: I0219 05:48:45.311425 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:45 crc kubenswrapper[5012]: I0219 05:48:45.311781 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:45 crc kubenswrapper[5012]: I0219 05:48:45.326707 5012 generic.go:334] "Generic (PLEG): container finished" podID="934d7854-a117-4051-a05a-034327616c89" containerID="391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9" exitCode=0 Feb 19 05:48:45 crc kubenswrapper[5012]: I0219 05:48:45.326783 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcjd2" event={"ID":"934d7854-a117-4051-a05a-034327616c89","Type":"ContainerDied","Data":"391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9"} Feb 19 05:48:45 crc kubenswrapper[5012]: I0219 
05:48:45.326861 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcjd2" event={"ID":"934d7854-a117-4051-a05a-034327616c89","Type":"ContainerStarted","Data":"6945b042fdd5b863d7b689dc7a424c7059487e0b30d92b16e72e614d02f9e037"} Feb 19 05:48:47 crc kubenswrapper[5012]: I0219 05:48:46.337045 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcjd2" event={"ID":"934d7854-a117-4051-a05a-034327616c89","Type":"ContainerStarted","Data":"8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219"} Feb 19 05:48:47 crc kubenswrapper[5012]: I0219 05:48:47.793134 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dm5jf" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerName="registry-server" probeResult="failure" output=< Feb 19 05:48:47 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 05:48:47 crc kubenswrapper[5012]: > Feb 19 05:48:48 crc kubenswrapper[5012]: I0219 05:48:48.360856 5012 generic.go:334] "Generic (PLEG): container finished" podID="934d7854-a117-4051-a05a-034327616c89" containerID="8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219" exitCode=0 Feb 19 05:48:48 crc kubenswrapper[5012]: I0219 05:48:48.360899 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcjd2" event={"ID":"934d7854-a117-4051-a05a-034327616c89","Type":"ContainerDied","Data":"8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219"} Feb 19 05:48:49 crc kubenswrapper[5012]: I0219 05:48:49.374692 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcjd2" event={"ID":"934d7854-a117-4051-a05a-034327616c89","Type":"ContainerStarted","Data":"983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797"} Feb 19 05:48:49 crc kubenswrapper[5012]: I0219 05:48:49.397551 5012 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zcjd2" podStartSLOduration=2.903983208 podStartE2EDuration="6.397528613s" podCreationTimestamp="2026-02-19 05:48:43 +0000 UTC" firstStartedPulling="2026-02-19 05:48:45.329937666 +0000 UTC m=+1421.363260255" lastFinishedPulling="2026-02-19 05:48:48.823483091 +0000 UTC m=+1424.856805660" observedRunningTime="2026-02-19 05:48:49.392716115 +0000 UTC m=+1425.426038694" watchObservedRunningTime="2026-02-19 05:48:49.397528613 +0000 UTC m=+1425.430851202" Feb 19 05:48:53 crc kubenswrapper[5012]: I0219 05:48:53.863822 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:53 crc kubenswrapper[5012]: I0219 05:48:53.864614 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:53 crc kubenswrapper[5012]: I0219 05:48:53.919613 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:54 crc kubenswrapper[5012]: I0219 05:48:54.507516 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:54 crc kubenswrapper[5012]: I0219 05:48:54.573274 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcjd2"] Feb 19 05:48:55 crc kubenswrapper[5012]: I0219 05:48:55.387491 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:55 crc kubenswrapper[5012]: I0219 05:48:55.460949 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:56 crc kubenswrapper[5012]: I0219 05:48:56.455771 5012 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-marketplace-zcjd2" podUID="934d7854-a117-4051-a05a-034327616c89" containerName="registry-server" containerID="cri-o://983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797" gracePeriod=2 Feb 19 05:48:56 crc kubenswrapper[5012]: I0219 05:48:56.576458 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dm5jf"] Feb 19 05:48:56 crc kubenswrapper[5012]: I0219 05:48:56.577147 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dm5jf" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerName="registry-server" containerID="cri-o://77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69" gracePeriod=2 Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.176824 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.320852 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6mlb\" (UniqueName: \"kubernetes.io/projected/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-kube-api-access-q6mlb\") pod \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.321016 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-utilities\") pod \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\" (UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.321056 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-catalog-content\") pod \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\" 
(UID: \"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4\") " Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.321962 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-utilities" (OuterVolumeSpecName: "utilities") pod "66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" (UID: "66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.324041 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.325870 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-kube-api-access-q6mlb" (OuterVolumeSpecName: "kube-api-access-q6mlb") pod "66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" (UID: "66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4"). InnerVolumeSpecName "kube-api-access-q6mlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.361067 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.426166 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6mlb\" (UniqueName: \"kubernetes.io/projected/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-kube-api-access-q6mlb\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.466619 5012 generic.go:334] "Generic (PLEG): container finished" podID="934d7854-a117-4051-a05a-034327616c89" containerID="983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797" exitCode=0 Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.466746 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcjd2" event={"ID":"934d7854-a117-4051-a05a-034327616c89","Type":"ContainerDied","Data":"983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797"} Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.466779 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zcjd2" event={"ID":"934d7854-a117-4051-a05a-034327616c89","Type":"ContainerDied","Data":"6945b042fdd5b863d7b689dc7a424c7059487e0b30d92b16e72e614d02f9e037"} Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.466824 5012 scope.go:117] "RemoveContainer" containerID="983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.467043 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zcjd2" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.470986 5012 generic.go:334] "Generic (PLEG): container finished" podID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerID="77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69" exitCode=0 Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.471025 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm5jf" event={"ID":"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4","Type":"ContainerDied","Data":"77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69"} Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.471129 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dm5jf" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.471156 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dm5jf" event={"ID":"66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4","Type":"ContainerDied","Data":"77be9753ec657fff700d5aa8c08179d08b1e0427037a7558362bc489f12b2bf9"} Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.471389 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" (UID: "66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.493069 5012 scope.go:117] "RemoveContainer" containerID="8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.514265 5012 scope.go:117] "RemoveContainer" containerID="391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.528092 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cdlb\" (UniqueName: \"kubernetes.io/projected/934d7854-a117-4051-a05a-034327616c89-kube-api-access-4cdlb\") pod \"934d7854-a117-4051-a05a-034327616c89\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.528225 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-catalog-content\") pod \"934d7854-a117-4051-a05a-034327616c89\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.528290 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-utilities\") pod \"934d7854-a117-4051-a05a-034327616c89\" (UID: \"934d7854-a117-4051-a05a-034327616c89\") " Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.528865 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.529084 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-utilities" (OuterVolumeSpecName: "utilities") 
pod "934d7854-a117-4051-a05a-034327616c89" (UID: "934d7854-a117-4051-a05a-034327616c89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.530944 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/934d7854-a117-4051-a05a-034327616c89-kube-api-access-4cdlb" (OuterVolumeSpecName: "kube-api-access-4cdlb") pod "934d7854-a117-4051-a05a-034327616c89" (UID: "934d7854-a117-4051-a05a-034327616c89"). InnerVolumeSpecName "kube-api-access-4cdlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.536649 5012 scope.go:117] "RemoveContainer" containerID="983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797" Feb 19 05:48:57 crc kubenswrapper[5012]: E0219 05:48:57.537119 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797\": container with ID starting with 983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797 not found: ID does not exist" containerID="983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.537168 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797"} err="failed to get container status \"983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797\": rpc error: code = NotFound desc = could not find container \"983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797\": container with ID starting with 983fdd8b2bef58526c41f28d0a7a2e4876d782a01b8a066e29657e6077557797 not found: ID does not exist" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.537196 5012 scope.go:117] "RemoveContainer" 
containerID="8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219" Feb 19 05:48:57 crc kubenswrapper[5012]: E0219 05:48:57.537663 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219\": container with ID starting with 8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219 not found: ID does not exist" containerID="8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.537736 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219"} err="failed to get container status \"8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219\": rpc error: code = NotFound desc = could not find container \"8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219\": container with ID starting with 8e2b1c1a61c9916799b9af16a326e3b7e23ee39e45069a55b17e4698a5432219 not found: ID does not exist" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.537771 5012 scope.go:117] "RemoveContainer" containerID="391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9" Feb 19 05:48:57 crc kubenswrapper[5012]: E0219 05:48:57.538194 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9\": container with ID starting with 391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9 not found: ID does not exist" containerID="391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.538219 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9"} err="failed to get container status \"391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9\": rpc error: code = NotFound desc = could not find container \"391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9\": container with ID starting with 391428608c8e577a49f113a189ae5fced306cd15db3e1aeb0d1e129ca9194ac9 not found: ID does not exist" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.538237 5012 scope.go:117] "RemoveContainer" containerID="77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.568120 5012 scope.go:117] "RemoveContainer" containerID="4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.575420 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "934d7854-a117-4051-a05a-034327616c89" (UID: "934d7854-a117-4051-a05a-034327616c89"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.630470 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cdlb\" (UniqueName: \"kubernetes.io/projected/934d7854-a117-4051-a05a-034327616c89-kube-api-access-4cdlb\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.630510 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.630524 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/934d7854-a117-4051-a05a-034327616c89-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.637866 5012 scope.go:117] "RemoveContainer" containerID="96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.670914 5012 scope.go:117] "RemoveContainer" containerID="77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69" Feb 19 05:48:57 crc kubenswrapper[5012]: E0219 05:48:57.671602 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69\": container with ID starting with 77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69 not found: ID does not exist" containerID="77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.671646 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69"} err="failed to get container status 
\"77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69\": rpc error: code = NotFound desc = could not find container \"77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69\": container with ID starting with 77b8dbaf02a44726d1041ad099300c0eb116529af947be1cd1dc7600fa46fd69 not found: ID does not exist" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.671670 5012 scope.go:117] "RemoveContainer" containerID="4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc" Feb 19 05:48:57 crc kubenswrapper[5012]: E0219 05:48:57.671966 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc\": container with ID starting with 4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc not found: ID does not exist" containerID="4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.671997 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc"} err="failed to get container status \"4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc\": rpc error: code = NotFound desc = could not find container \"4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc\": container with ID starting with 4a134ef9e6e81253a1b848db44f14686a96a00b5482ac2214a76b079039cf3cc not found: ID does not exist" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.672065 5012 scope.go:117] "RemoveContainer" containerID="96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481" Feb 19 05:48:57 crc kubenswrapper[5012]: E0219 05:48:57.672436 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481\": container with ID starting with 96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481 not found: ID does not exist" containerID="96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.672461 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481"} err="failed to get container status \"96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481\": rpc error: code = NotFound desc = could not find container \"96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481\": container with ID starting with 96111ff7c86843bbebfae0c5eab40fcc4fd5b8b634eabe1f550582ca4935d481 not found: ID does not exist" Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.818988 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcjd2"] Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.833433 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zcjd2"] Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.846378 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dm5jf"] Feb 19 05:48:57 crc kubenswrapper[5012]: I0219 05:48:57.859175 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dm5jf"] Feb 19 05:48:58 crc kubenswrapper[5012]: I0219 05:48:58.722664 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" path="/var/lib/kubelet/pods/66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4/volumes" Feb 19 05:48:58 crc kubenswrapper[5012]: I0219 05:48:58.724413 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="934d7854-a117-4051-a05a-034327616c89" 
path="/var/lib/kubelet/pods/934d7854-a117-4051-a05a-034327616c89/volumes" Feb 19 05:49:14 crc kubenswrapper[5012]: I0219 05:49:14.430572 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:49:14 crc kubenswrapper[5012]: I0219 05:49:14.431197 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:49:33 crc kubenswrapper[5012]: I0219 05:49:33.188816 5012 scope.go:117] "RemoveContainer" containerID="55079917653f6fec11a6880998a2eb1b86a3b903487d3ecb0aa13cd966d7990e" Feb 19 05:49:33 crc kubenswrapper[5012]: I0219 05:49:33.407437 5012 scope.go:117] "RemoveContainer" containerID="12a292fc1b8e4523fdc0fb30ca3590a1b6b6f0c70c3e42e076f92a7b213241f2" Feb 19 05:49:33 crc kubenswrapper[5012]: I0219 05:49:33.464007 5012 scope.go:117] "RemoveContainer" containerID="3fcdc6a7de1157e87df26c6381be0f82492f8c4422bc5e6ab2f42667c4a696ee" Feb 19 05:49:33 crc kubenswrapper[5012]: I0219 05:49:33.530701 5012 scope.go:117] "RemoveContainer" containerID="e454f72d42b6df4ccbea155823e52fa4dbc71ac17be418579910450da7af968d" Feb 19 05:49:44 crc kubenswrapper[5012]: I0219 05:49:44.430931 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:49:44 crc kubenswrapper[5012]: I0219 05:49:44.431708 5012 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:49:44 crc kubenswrapper[5012]: I0219 05:49:44.431777 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:49:44 crc kubenswrapper[5012]: I0219 05:49:44.433638 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 05:49:44 crc kubenswrapper[5012]: I0219 05:49:44.433772 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" gracePeriod=600 Feb 19 05:49:44 crc kubenswrapper[5012]: E0219 05:49:44.588384 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:49:45 crc kubenswrapper[5012]: I0219 05:49:45.149266 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" 
containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" exitCode=0 Feb 19 05:49:45 crc kubenswrapper[5012]: I0219 05:49:45.149357 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"} Feb 19 05:49:45 crc kubenswrapper[5012]: I0219 05:49:45.149649 5012 scope.go:117] "RemoveContainer" containerID="6721017012e745bfd497807b3e0766cbf7c779446215cbbe94491f729f86c6ac" Feb 19 05:49:45 crc kubenswrapper[5012]: I0219 05:49:45.150782 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:49:45 crc kubenswrapper[5012]: E0219 05:49:45.151436 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:49:59 crc kubenswrapper[5012]: I0219 05:49:59.704217 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:49:59 crc kubenswrapper[5012]: E0219 05:49:59.705490 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:50:11 crc kubenswrapper[5012]: I0219 
05:50:11.704456 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:50:11 crc kubenswrapper[5012]: E0219 05:50:11.705827 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.889355 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2tcvb"] Feb 19 05:50:18 crc kubenswrapper[5012]: E0219 05:50:18.891558 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934d7854-a117-4051-a05a-034327616c89" containerName="registry-server" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.891600 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="934d7854-a117-4051-a05a-034327616c89" containerName="registry-server" Feb 19 05:50:18 crc kubenswrapper[5012]: E0219 05:50:18.891627 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerName="registry-server" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.891639 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerName="registry-server" Feb 19 05:50:18 crc kubenswrapper[5012]: E0219 05:50:18.891660 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934d7854-a117-4051-a05a-034327616c89" containerName="extract-utilities" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.891674 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="934d7854-a117-4051-a05a-034327616c89" containerName="extract-utilities" Feb 19 
05:50:18 crc kubenswrapper[5012]: E0219 05:50:18.891717 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934d7854-a117-4051-a05a-034327616c89" containerName="extract-content" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.891732 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="934d7854-a117-4051-a05a-034327616c89" containerName="extract-content" Feb 19 05:50:18 crc kubenswrapper[5012]: E0219 05:50:18.891796 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerName="extract-utilities" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.891808 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerName="extract-utilities" Feb 19 05:50:18 crc kubenswrapper[5012]: E0219 05:50:18.891824 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerName="extract-content" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.891838 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerName="extract-content" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.892209 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="934d7854-a117-4051-a05a-034327616c89" containerName="registry-server" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.892258 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="66ce2fdc-7b0b-49bb-8f06-fee8eeed05f4" containerName="registry-server" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.894976 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:18 crc kubenswrapper[5012]: I0219 05:50:18.902578 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2tcvb"] Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.045593 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-utilities\") pod \"certified-operators-2tcvb\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.045732 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hbf4\" (UniqueName: \"kubernetes.io/projected/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-kube-api-access-2hbf4\") pod \"certified-operators-2tcvb\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.045977 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-catalog-content\") pod \"certified-operators-2tcvb\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.148210 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-utilities\") pod \"certified-operators-2tcvb\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.148428 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2hbf4\" (UniqueName: \"kubernetes.io/projected/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-kube-api-access-2hbf4\") pod \"certified-operators-2tcvb\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.148534 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-catalog-content\") pod \"certified-operators-2tcvb\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.149489 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-catalog-content\") pod \"certified-operators-2tcvb\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.149508 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-utilities\") pod \"certified-operators-2tcvb\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.184067 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hbf4\" (UniqueName: \"kubernetes.io/projected/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-kube-api-access-2hbf4\") pod \"certified-operators-2tcvb\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.255484 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:19 crc kubenswrapper[5012]: I0219 05:50:19.814063 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2tcvb"] Feb 19 05:50:19 crc kubenswrapper[5012]: W0219 05:50:19.816202 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd355dfcc_ca49_41f8_93b7_630cf9a2a20f.slice/crio-9617751c883719b0b88aab3abf97f5bdf679d89e37bc5c25367dcefb091fcb6e WatchSource:0}: Error finding container 9617751c883719b0b88aab3abf97f5bdf679d89e37bc5c25367dcefb091fcb6e: Status 404 returned error can't find the container with id 9617751c883719b0b88aab3abf97f5bdf679d89e37bc5c25367dcefb091fcb6e Feb 19 05:50:20 crc kubenswrapper[5012]: I0219 05:50:20.617184 5012 generic.go:334] "Generic (PLEG): container finished" podID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerID="7d9d854cd627b97bd284e92095af02bb1f0cd8fa5c703cf9288525eb0d73cbb0" exitCode=0 Feb 19 05:50:20 crc kubenswrapper[5012]: I0219 05:50:20.617417 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tcvb" event={"ID":"d355dfcc-ca49-41f8-93b7-630cf9a2a20f","Type":"ContainerDied","Data":"7d9d854cd627b97bd284e92095af02bb1f0cd8fa5c703cf9288525eb0d73cbb0"} Feb 19 05:50:20 crc kubenswrapper[5012]: I0219 05:50:20.617552 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tcvb" event={"ID":"d355dfcc-ca49-41f8-93b7-630cf9a2a20f","Type":"ContainerStarted","Data":"9617751c883719b0b88aab3abf97f5bdf679d89e37bc5c25367dcefb091fcb6e"} Feb 19 05:50:20 crc kubenswrapper[5012]: I0219 05:50:20.622365 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 05:50:21 crc kubenswrapper[5012]: I0219 05:50:21.630381 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-2tcvb" event={"ID":"d355dfcc-ca49-41f8-93b7-630cf9a2a20f","Type":"ContainerStarted","Data":"5b1ada347824fa800386982818c8efb6c13ea89793c8a09d04b560111d8a1555"} Feb 19 05:50:22 crc kubenswrapper[5012]: I0219 05:50:22.647508 5012 generic.go:334] "Generic (PLEG): container finished" podID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerID="5b1ada347824fa800386982818c8efb6c13ea89793c8a09d04b560111d8a1555" exitCode=0 Feb 19 05:50:22 crc kubenswrapper[5012]: I0219 05:50:22.647611 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tcvb" event={"ID":"d355dfcc-ca49-41f8-93b7-630cf9a2a20f","Type":"ContainerDied","Data":"5b1ada347824fa800386982818c8efb6c13ea89793c8a09d04b560111d8a1555"} Feb 19 05:50:23 crc kubenswrapper[5012]: I0219 05:50:23.663903 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tcvb" event={"ID":"d355dfcc-ca49-41f8-93b7-630cf9a2a20f","Type":"ContainerStarted","Data":"acd498e678b373195185b97d704769b9fd8eb1a02b368a7997d026612e487a3a"} Feb 19 05:50:23 crc kubenswrapper[5012]: I0219 05:50:23.701590 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2tcvb" podStartSLOduration=3.254314919 podStartE2EDuration="5.70155704s" podCreationTimestamp="2026-02-19 05:50:18 +0000 UTC" firstStartedPulling="2026-02-19 05:50:20.621937612 +0000 UTC m=+1516.655260221" lastFinishedPulling="2026-02-19 05:50:23.069179733 +0000 UTC m=+1519.102502342" observedRunningTime="2026-02-19 05:50:23.694763964 +0000 UTC m=+1519.728086573" watchObservedRunningTime="2026-02-19 05:50:23.70155704 +0000 UTC m=+1519.734879649" Feb 19 05:50:25 crc kubenswrapper[5012]: I0219 05:50:25.702762 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:50:25 crc kubenswrapper[5012]: E0219 05:50:25.703268 5012 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:50:29 crc kubenswrapper[5012]: I0219 05:50:29.256375 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:29 crc kubenswrapper[5012]: I0219 05:50:29.257273 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:29 crc kubenswrapper[5012]: I0219 05:50:29.340033 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:29 crc kubenswrapper[5012]: I0219 05:50:29.825268 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:29 crc kubenswrapper[5012]: I0219 05:50:29.893702 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2tcvb"] Feb 19 05:50:31 crc kubenswrapper[5012]: I0219 05:50:31.783045 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2tcvb" podUID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerName="registry-server" containerID="cri-o://acd498e678b373195185b97d704769b9fd8eb1a02b368a7997d026612e487a3a" gracePeriod=2 Feb 19 05:50:32 crc kubenswrapper[5012]: I0219 05:50:32.814178 5012 generic.go:334] "Generic (PLEG): container finished" podID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerID="acd498e678b373195185b97d704769b9fd8eb1a02b368a7997d026612e487a3a" exitCode=0 Feb 19 
05:50:32 crc kubenswrapper[5012]: I0219 05:50:32.814553 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tcvb" event={"ID":"d355dfcc-ca49-41f8-93b7-630cf9a2a20f","Type":"ContainerDied","Data":"acd498e678b373195185b97d704769b9fd8eb1a02b368a7997d026612e487a3a"} Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.460714 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.610975 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hbf4\" (UniqueName: \"kubernetes.io/projected/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-kube-api-access-2hbf4\") pod \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.611222 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-catalog-content\") pod \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.611267 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-utilities\") pod \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\" (UID: \"d355dfcc-ca49-41f8-93b7-630cf9a2a20f\") " Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.612330 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-utilities" (OuterVolumeSpecName: "utilities") pod "d355dfcc-ca49-41f8-93b7-630cf9a2a20f" (UID: "d355dfcc-ca49-41f8-93b7-630cf9a2a20f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.619890 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-kube-api-access-2hbf4" (OuterVolumeSpecName: "kube-api-access-2hbf4") pod "d355dfcc-ca49-41f8-93b7-630cf9a2a20f" (UID: "d355dfcc-ca49-41f8-93b7-630cf9a2a20f"). InnerVolumeSpecName "kube-api-access-2hbf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.668165 5012 scope.go:117] "RemoveContainer" containerID="5011a2da1b6766de9dceb07b094e5e5b90457583e5b1d7f21e441d5bc980ef81" Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.690990 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d355dfcc-ca49-41f8-93b7-630cf9a2a20f" (UID: "d355dfcc-ca49-41f8-93b7-630cf9a2a20f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.708616 5012 scope.go:117] "RemoveContainer" containerID="8a02fea3b4cd70626ac243cec71c2d7a481574c8f18cffc243a46c68a245c413" Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.715238 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hbf4\" (UniqueName: \"kubernetes.io/projected/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-kube-api-access-2hbf4\") on node \"crc\" DevicePath \"\"" Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.715286 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.715328 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d355dfcc-ca49-41f8-93b7-630cf9a2a20f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.736268 5012 scope.go:117] "RemoveContainer" containerID="1bf5d73af424c2f421bc54586605dbed2a0980894768360700238dc093ac82ff" Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.768876 5012 scope.go:117] "RemoveContainer" containerID="f9417f3089ab939acabaf087bdedc14bb6991a7978946e02fec09196a1d9ec1c" Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.792193 5012 scope.go:117] "RemoveContainer" containerID="bdf4b7c244764dd2879106070ed07ec4228686361067f77e4b0e731b44af052c" Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.834398 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2tcvb" event={"ID":"d355dfcc-ca49-41f8-93b7-630cf9a2a20f","Type":"ContainerDied","Data":"9617751c883719b0b88aab3abf97f5bdf679d89e37bc5c25367dcefb091fcb6e"} Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.834455 5012 scope.go:117] 
"RemoveContainer" containerID="acd498e678b373195185b97d704769b9fd8eb1a02b368a7997d026612e487a3a" Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.834618 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2tcvb" Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.883366 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2tcvb"] Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.893410 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2tcvb"] Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.903332 5012 scope.go:117] "RemoveContainer" containerID="5b1ada347824fa800386982818c8efb6c13ea89793c8a09d04b560111d8a1555" Feb 19 05:50:33 crc kubenswrapper[5012]: I0219 05:50:33.990078 5012 scope.go:117] "RemoveContainer" containerID="7d9d854cd627b97bd284e92095af02bb1f0cd8fa5c703cf9288525eb0d73cbb0" Feb 19 05:50:34 crc kubenswrapper[5012]: I0219 05:50:34.719944 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" path="/var/lib/kubelet/pods/d355dfcc-ca49-41f8-93b7-630cf9a2a20f/volumes" Feb 19 05:50:39 crc kubenswrapper[5012]: I0219 05:50:39.703526 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:50:39 crc kubenswrapper[5012]: E0219 05:50:39.704573 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:50:50 crc kubenswrapper[5012]: I0219 05:50:50.702744 5012 
scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:50:50 crc kubenswrapper[5012]: E0219 05:50:50.703567 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:51:05 crc kubenswrapper[5012]: I0219 05:51:05.702838 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:51:05 crc kubenswrapper[5012]: E0219 05:51:05.703873 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:51:16 crc kubenswrapper[5012]: E0219 05:51:16.950834 5012 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebf47868_aec9_4f2e_8c08_499161f45b18.slice/crio-29f44073d44e2b9740f67ef79845a00f042a9a8a60fe1de16bde1fbb0612c36e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebf47868_aec9_4f2e_8c08_499161f45b18.slice/crio-conmon-29f44073d44e2b9740f67ef79845a00f042a9a8a60fe1de16bde1fbb0612c36e.scope\": RecentStats: unable to find data in memory cache]" Feb 19 05:51:17 crc kubenswrapper[5012]: I0219 
05:51:17.396991 5012 generic.go:334] "Generic (PLEG): container finished" podID="ebf47868-aec9-4f2e-8c08-499161f45b18" containerID="29f44073d44e2b9740f67ef79845a00f042a9a8a60fe1de16bde1fbb0612c36e" exitCode=0 Feb 19 05:51:17 crc kubenswrapper[5012]: I0219 05:51:17.397140 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" event={"ID":"ebf47868-aec9-4f2e-8c08-499161f45b18","Type":"ContainerDied","Data":"29f44073d44e2b9740f67ef79845a00f042a9a8a60fe1de16bde1fbb0612c36e"} Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.033512 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.112260 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v85qw\" (UniqueName: \"kubernetes.io/projected/ebf47868-aec9-4f2e-8c08-499161f45b18-kube-api-access-v85qw\") pod \"ebf47868-aec9-4f2e-8c08-499161f45b18\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.112376 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-bootstrap-combined-ca-bundle\") pod \"ebf47868-aec9-4f2e-8c08-499161f45b18\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.112530 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-inventory\") pod \"ebf47868-aec9-4f2e-8c08-499161f45b18\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.112631 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-ssh-key-openstack-edpm-ipam\") pod \"ebf47868-aec9-4f2e-8c08-499161f45b18\" (UID: \"ebf47868-aec9-4f2e-8c08-499161f45b18\") " Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.119980 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebf47868-aec9-4f2e-8c08-499161f45b18-kube-api-access-v85qw" (OuterVolumeSpecName: "kube-api-access-v85qw") pod "ebf47868-aec9-4f2e-8c08-499161f45b18" (UID: "ebf47868-aec9-4f2e-8c08-499161f45b18"). InnerVolumeSpecName "kube-api-access-v85qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.120364 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ebf47868-aec9-4f2e-8c08-499161f45b18" (UID: "ebf47868-aec9-4f2e-8c08-499161f45b18"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.153319 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-inventory" (OuterVolumeSpecName: "inventory") pod "ebf47868-aec9-4f2e-8c08-499161f45b18" (UID: "ebf47868-aec9-4f2e-8c08-499161f45b18"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.154688 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ebf47868-aec9-4f2e-8c08-499161f45b18" (UID: "ebf47868-aec9-4f2e-8c08-499161f45b18"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.216585 5012 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.216626 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.216639 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebf47868-aec9-4f2e-8c08-499161f45b18-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.216650 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v85qw\" (UniqueName: \"kubernetes.io/projected/ebf47868-aec9-4f2e-8c08-499161f45b18-kube-api-access-v85qw\") on node \"crc\" DevicePath \"\"" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.429629 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" event={"ID":"ebf47868-aec9-4f2e-8c08-499161f45b18","Type":"ContainerDied","Data":"47a7d8825e1acf3d49de6e07e3e26af34d28c13a97cb0ebcbb15be03c70da6f3"} Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.429691 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47a7d8825e1acf3d49de6e07e3e26af34d28c13a97cb0ebcbb15be03c70da6f3" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.429781 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.572513 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r"] Feb 19 05:51:19 crc kubenswrapper[5012]: E0219 05:51:19.573178 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf47868-aec9-4f2e-8c08-499161f45b18" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.573211 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf47868-aec9-4f2e-8c08-499161f45b18" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 05:51:19 crc kubenswrapper[5012]: E0219 05:51:19.573237 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerName="registry-server" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.573249 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerName="registry-server" Feb 19 05:51:19 crc kubenswrapper[5012]: E0219 05:51:19.573328 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerName="extract-utilities" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.573347 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerName="extract-utilities" Feb 19 05:51:19 crc kubenswrapper[5012]: E0219 05:51:19.573378 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerName="extract-content" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.573391 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerName="extract-content" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.573757 
5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d355dfcc-ca49-41f8-93b7-630cf9a2a20f" containerName="registry-server" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.573786 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf47868-aec9-4f2e-8c08-499161f45b18" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.575015 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.577197 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.578168 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.581662 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.581865 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.585704 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r"] Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.703721 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:51:19 crc kubenswrapper[5012]: E0219 05:51:19.704515 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.728727 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l597r\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.729004 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l597r\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.729071 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwn45\" (UniqueName: \"kubernetes.io/projected/02358307-dba6-44fa-9799-2440b1496c55-kube-api-access-nwn45\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l597r\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.832041 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l597r\" (UID: 
\"02358307-dba6-44fa-9799-2440b1496c55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.832126 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwn45\" (UniqueName: \"kubernetes.io/projected/02358307-dba6-44fa-9799-2440b1496c55-kube-api-access-nwn45\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l597r\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.832175 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l597r\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.837930 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l597r\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.839952 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l597r\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" Feb 19 05:51:19 crc kubenswrapper[5012]: I0219 05:51:19.914924 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nwn45\" (UniqueName: \"kubernetes.io/projected/02358307-dba6-44fa-9799-2440b1496c55-kube-api-access-nwn45\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-l597r\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" Feb 19 05:51:20 crc kubenswrapper[5012]: I0219 05:51:20.207764 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" Feb 19 05:51:20 crc kubenswrapper[5012]: I0219 05:51:20.832882 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r"] Feb 19 05:51:21 crc kubenswrapper[5012]: I0219 05:51:21.459006 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" event={"ID":"02358307-dba6-44fa-9799-2440b1496c55","Type":"ContainerStarted","Data":"9b8b04aec33851631b41f126ebee440f562995d257d9a25d785090f3aa327c69"} Feb 19 05:51:22 crc kubenswrapper[5012]: I0219 05:51:22.480453 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" event={"ID":"02358307-dba6-44fa-9799-2440b1496c55","Type":"ContainerStarted","Data":"eb9cec4bc6acba41d4aa46b7041e6a9f81869527c2b21dfb77744606447bcbcd"} Feb 19 05:51:22 crc kubenswrapper[5012]: I0219 05:51:22.513148 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" podStartSLOduration=3.011121993 podStartE2EDuration="3.513114817s" podCreationTimestamp="2026-02-19 05:51:19 +0000 UTC" firstStartedPulling="2026-02-19 05:51:20.834890416 +0000 UTC m=+1576.868213025" lastFinishedPulling="2026-02-19 05:51:21.33688327 +0000 UTC m=+1577.370205849" observedRunningTime="2026-02-19 05:51:22.50505684 +0000 UTC 
m=+1578.538379419" watchObservedRunningTime="2026-02-19 05:51:22.513114817 +0000 UTC m=+1578.546437426" Feb 19 05:51:30 crc kubenswrapper[5012]: I0219 05:51:30.703377 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:51:30 crc kubenswrapper[5012]: E0219 05:51:30.704884 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:51:33 crc kubenswrapper[5012]: I0219 05:51:33.948916 5012 scope.go:117] "RemoveContainer" containerID="694cb7239194668fdd96877662e230d283d111646e3e233d72ff54fa322e04ce" Feb 19 05:51:33 crc kubenswrapper[5012]: I0219 05:51:33.984262 5012 scope.go:117] "RemoveContainer" containerID="34a399338c013b61152c60fcd0046303ede4ee51c443dfcf2a65805c9c44defe" Feb 19 05:51:34 crc kubenswrapper[5012]: I0219 05:51:34.017103 5012 scope.go:117] "RemoveContainer" containerID="4b17f7e35bacf75c95fd5af2ce831c9268ee336939f6e0582d263b98f40338b3" Feb 19 05:51:34 crc kubenswrapper[5012]: I0219 05:51:34.053429 5012 scope.go:117] "RemoveContainer" containerID="cb200dd76cd661f7ff34b71bfb488f08698c2c8969d0994a64b2d1b69bb789ec" Feb 19 05:51:44 crc kubenswrapper[5012]: I0219 05:51:44.723196 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:51:44 crc kubenswrapper[5012]: E0219 05:51:44.724283 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:51:59 crc kubenswrapper[5012]: I0219 05:51:59.704711 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:51:59 crc kubenswrapper[5012]: E0219 05:51:59.706049 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.082371 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-r8ddf"] Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.096670 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-91bd-account-create-update-54r7l"] Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.107361 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-jktc7"] Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.124743 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-hthfx"] Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.133444 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-22e2-account-create-update-vddht"] Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.140889 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-r8ddf"] Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.148598 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-91bd-account-create-update-54r7l"] Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.156349 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-22e2-account-create-update-vddht"] Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.164195 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-jktc7"] Feb 19 05:52:03 crc kubenswrapper[5012]: I0219 05:52:03.171802 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-hthfx"] Feb 19 05:52:04 crc kubenswrapper[5012]: I0219 05:52:04.720919 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12f3008a-413a-4fe7-b3c1-773c10b6b2bf" path="/var/lib/kubelet/pods/12f3008a-413a-4fe7-b3c1-773c10b6b2bf/volumes" Feb 19 05:52:04 crc kubenswrapper[5012]: I0219 05:52:04.722637 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a" path="/var/lib/kubelet/pods/6f5d1fc5-7a37-4ed2-86d6-7e0689c7b65a/volumes" Feb 19 05:52:04 crc kubenswrapper[5012]: I0219 05:52:04.725214 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90a75d3b-186a-41d6-92a8-94729c520aa5" path="/var/lib/kubelet/pods/90a75d3b-186a-41d6-92a8-94729c520aa5/volumes" Feb 19 05:52:04 crc kubenswrapper[5012]: I0219 05:52:04.726975 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1e7d95a-d78a-4d54-a66b-565114b4823e" path="/var/lib/kubelet/pods/d1e7d95a-d78a-4d54-a66b-565114b4823e/volumes" Feb 19 05:52:04 crc kubenswrapper[5012]: I0219 05:52:04.729455 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1e3020d-901d-4649-9e94-c5c0a4cc523d" path="/var/lib/kubelet/pods/e1e3020d-901d-4649-9e94-c5c0a4cc523d/volumes" Feb 19 05:52:06 crc kubenswrapper[5012]: I0219 05:52:06.033266 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-vjzm9"] Feb 19 05:52:06 crc 
kubenswrapper[5012]: I0219 05:52:06.050991 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-vjzm9"] Feb 19 05:52:06 crc kubenswrapper[5012]: I0219 05:52:06.065362 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-34a7-account-create-update-84f2g"] Feb 19 05:52:06 crc kubenswrapper[5012]: I0219 05:52:06.076896 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b5f0-account-create-update-l7b8m"] Feb 19 05:52:06 crc kubenswrapper[5012]: I0219 05:52:06.085349 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-34a7-account-create-update-84f2g"] Feb 19 05:52:06 crc kubenswrapper[5012]: I0219 05:52:06.092479 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b5f0-account-create-update-l7b8m"] Feb 19 05:52:06 crc kubenswrapper[5012]: I0219 05:52:06.726546 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="533d4699-332c-4ceb-ad6e-77c680699214" path="/var/lib/kubelet/pods/533d4699-332c-4ceb-ad6e-77c680699214/volumes" Feb 19 05:52:06 crc kubenswrapper[5012]: I0219 05:52:06.727288 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e45e098-f689-4015-9871-5f66e5d7bef1" path="/var/lib/kubelet/pods/6e45e098-f689-4015-9871-5f66e5d7bef1/volumes" Feb 19 05:52:06 crc kubenswrapper[5012]: I0219 05:52:06.728340 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a973520b-997d-4c23-a056-590c96123e43" path="/var/lib/kubelet/pods/a973520b-997d-4c23-a056-590c96123e43/volumes" Feb 19 05:52:10 crc kubenswrapper[5012]: I0219 05:52:10.703941 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:52:10 crc kubenswrapper[5012]: E0219 05:52:10.705166 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:52:17 crc kubenswrapper[5012]: I0219 05:52:17.044221 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lj2kq"] Feb 19 05:52:17 crc kubenswrapper[5012]: I0219 05:52:17.062370 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lj2kq"] Feb 19 05:52:18 crc kubenswrapper[5012]: I0219 05:52:18.713910 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c559b49-5b5e-435d-9a6a-66dd1d3cbc79" path="/var/lib/kubelet/pods/3c559b49-5b5e-435d-9a6a-66dd1d3cbc79/volumes" Feb 19 05:52:25 crc kubenswrapper[5012]: I0219 05:52:25.702929 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:52:25 crc kubenswrapper[5012]: E0219 05:52:25.703999 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.156843 5012 scope.go:117] "RemoveContainer" containerID="0ba4832ef5cde65c22a33ecfff620cd13c71e947e2063a45381a8045e3407918" Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.199528 5012 scope.go:117] "RemoveContainer" containerID="8d4101d8165775d3c785f3ad562d7ef71806f55866410c4f9e87581c5430851f" Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.279638 5012 scope.go:117] "RemoveContainer" 
containerID="afdc318ce7e7f31c55b83d198c0056a9143debe76f4068e0b8b55a3cd789f800" Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.322212 5012 scope.go:117] "RemoveContainer" containerID="c98bff27bc9812d723f9217b691c091425289e0f299460c4c4e1c7163b359d43" Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.343718 5012 scope.go:117] "RemoveContainer" containerID="65e190912c6d7142d01553a587f58e32095a3f893daa4d06beb98e431777939c" Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.381956 5012 scope.go:117] "RemoveContainer" containerID="573a87d5e8e95277642af154eba731e6d506fbe9be8db1436f41349ffe7bcbd4" Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.435743 5012 scope.go:117] "RemoveContainer" containerID="93e7f5c5600e832347781d221af700104ca8f39c9c057fb3a233ce4702cf409c" Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.470882 5012 scope.go:117] "RemoveContainer" containerID="6cb45a4049590e4fb7d60e94e092be98bdb1a162fc286f8a8013620e8c330260" Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.494886 5012 scope.go:117] "RemoveContainer" containerID="0da1732600a370cfbfe77664995408f2ab300c5ef7fcf22ab0fd4f379bf54473" Feb 19 05:52:34 crc kubenswrapper[5012]: I0219 05:52:34.514352 5012 scope.go:117] "RemoveContainer" containerID="e1dc1ea6e87e48e7096bcfb12892dc9ac8929ba2984948549033f17095a5c4d5" Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.071931 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8f98-account-create-update-7gqc9"] Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.090583 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4vdtn"] Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.101698 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8f98-account-create-update-7gqc9"] Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.112635 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-db-create-9pk56"] Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.120170 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6b89-account-create-update-65d6l"] Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.127567 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4vdtn"] Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.134512 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6b89-account-create-update-65d6l"] Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.141654 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-9pk56"] Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.724209 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f81d2f2-d61b-49e6-bd6a-f466da52df74" path="/var/lib/kubelet/pods/0f81d2f2-d61b-49e6-bd6a-f466da52df74/volumes" Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.726090 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d452976-060b-4c25-9dd0-ffed69bb4d84" path="/var/lib/kubelet/pods/5d452976-060b-4c25-9dd0-ffed69bb4d84/volumes" Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.727484 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4bd4c60-a255-42cf-8dd0-913737e4b189" path="/var/lib/kubelet/pods/a4bd4c60-a255-42cf-8dd0-913737e4b189/volumes" Feb 19 05:52:36 crc kubenswrapper[5012]: I0219 05:52:36.729057 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff889c32-0dda-4734-a907-54f4a53e649f" path="/var/lib/kubelet/pods/ff889c32-0dda-4734-a907-54f4a53e649f/volumes" Feb 19 05:52:40 crc kubenswrapper[5012]: I0219 05:52:40.050777 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-24p82"] Feb 19 05:52:40 crc kubenswrapper[5012]: I0219 05:52:40.063667 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-db-sync-24p82"] Feb 19 05:52:40 crc kubenswrapper[5012]: I0219 05:52:40.703259 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:52:40 crc kubenswrapper[5012]: E0219 05:52:40.704081 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:52:40 crc kubenswrapper[5012]: I0219 05:52:40.725436 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d56d90-ce06-4de3-9edb-2092780e9afe" path="/var/lib/kubelet/pods/31d56d90-ce06-4de3-9edb-2092780e9afe/volumes" Feb 19 05:52:46 crc kubenswrapper[5012]: I0219 05:52:46.058513 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-x7kz5"] Feb 19 05:52:46 crc kubenswrapper[5012]: I0219 05:52:46.074359 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-x7kz5"] Feb 19 05:52:46 crc kubenswrapper[5012]: I0219 05:52:46.729531 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13b820bd-7677-4b9c-a16f-987e22a71876" path="/var/lib/kubelet/pods/13b820bd-7677-4b9c-a16f-987e22a71876/volumes" Feb 19 05:52:53 crc kubenswrapper[5012]: I0219 05:52:53.703445 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:52:53 crc kubenswrapper[5012]: E0219 05:52:53.705116 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:53:06 crc kubenswrapper[5012]: I0219 05:53:06.704156 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:53:06 crc kubenswrapper[5012]: E0219 05:53:06.707102 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:53:08 crc kubenswrapper[5012]: I0219 05:53:08.045872 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-gfhmj"] Feb 19 05:53:08 crc kubenswrapper[5012]: I0219 05:53:08.059009 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c723-account-create-update-n6sg9"] Feb 19 05:53:08 crc kubenswrapper[5012]: I0219 05:53:08.067095 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-gfhmj"] Feb 19 05:53:08 crc kubenswrapper[5012]: I0219 05:53:08.074620 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c723-account-create-update-n6sg9"] Feb 19 05:53:08 crc kubenswrapper[5012]: I0219 05:53:08.723445 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c63064a-a5f1-48da-b11c-eb76b04e3397" path="/var/lib/kubelet/pods/8c63064a-a5f1-48da-b11c-eb76b04e3397/volumes" Feb 19 05:53:08 crc kubenswrapper[5012]: I0219 05:53:08.724694 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd86f802-eef3-479a-870a-e34e7ce028ba" 
path="/var/lib/kubelet/pods/cd86f802-eef3-479a-870a-e34e7ce028ba/volumes" Feb 19 05:53:19 crc kubenswrapper[5012]: I0219 05:53:19.702912 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:53:19 crc kubenswrapper[5012]: E0219 05:53:19.703861 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 05:53:29 crc kubenswrapper[5012]: I0219 05:53:29.067827 5012 generic.go:334] "Generic (PLEG): container finished" podID="02358307-dba6-44fa-9799-2440b1496c55" containerID="eb9cec4bc6acba41d4aa46b7041e6a9f81869527c2b21dfb77744606447bcbcd" exitCode=0 Feb 19 05:53:29 crc kubenswrapper[5012]: I0219 05:53:29.067973 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" event={"ID":"02358307-dba6-44fa-9799-2440b1496c55","Type":"ContainerDied","Data":"eb9cec4bc6acba41d4aa46b7041e6a9f81869527c2b21dfb77744606447bcbcd"} Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.530180 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.703670 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwn45\" (UniqueName: \"kubernetes.io/projected/02358307-dba6-44fa-9799-2440b1496c55-kube-api-access-nwn45\") pod \"02358307-dba6-44fa-9799-2440b1496c55\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.704599 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-ssh-key-openstack-edpm-ipam\") pod \"02358307-dba6-44fa-9799-2440b1496c55\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.704920 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-inventory\") pod \"02358307-dba6-44fa-9799-2440b1496c55\" (UID: \"02358307-dba6-44fa-9799-2440b1496c55\") " Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.716701 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02358307-dba6-44fa-9799-2440b1496c55-kube-api-access-nwn45" (OuterVolumeSpecName: "kube-api-access-nwn45") pod "02358307-dba6-44fa-9799-2440b1496c55" (UID: "02358307-dba6-44fa-9799-2440b1496c55"). InnerVolumeSpecName "kube-api-access-nwn45". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.746016 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-inventory" (OuterVolumeSpecName: "inventory") pod "02358307-dba6-44fa-9799-2440b1496c55" (UID: "02358307-dba6-44fa-9799-2440b1496c55"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.753590 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "02358307-dba6-44fa-9799-2440b1496c55" (UID: "02358307-dba6-44fa-9799-2440b1496c55"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.808800 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwn45\" (UniqueName: \"kubernetes.io/projected/02358307-dba6-44fa-9799-2440b1496c55-kube-api-access-nwn45\") on node \"crc\" DevicePath \"\"" Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.808845 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:53:30 crc kubenswrapper[5012]: I0219 05:53:30.808860 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/02358307-dba6-44fa-9799-2440b1496c55-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.093080 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" event={"ID":"02358307-dba6-44fa-9799-2440b1496c55","Type":"ContainerDied","Data":"9b8b04aec33851631b41f126ebee440f562995d257d9a25d785090f3aa327c69"} Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.093143 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b8b04aec33851631b41f126ebee440f562995d257d9a25d785090f3aa327c69" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 
05:53:31.093223 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-l597r" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.241879 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"] Feb 19 05:53:31 crc kubenswrapper[5012]: E0219 05:53:31.242778 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02358307-dba6-44fa-9799-2440b1496c55" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.242812 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="02358307-dba6-44fa-9799-2440b1496c55" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.243228 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="02358307-dba6-44fa-9799-2440b1496c55" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.244549 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"
Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.254703 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"]
Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.259963 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.260389 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.260649 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2"
Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.264656 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.421380 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8sh74\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"
Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.421658 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7r84\" (UniqueName: \"kubernetes.io/projected/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-kube-api-access-d7r84\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8sh74\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"
Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.421811 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8sh74\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"
Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.524782 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7r84\" (UniqueName: \"kubernetes.io/projected/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-kube-api-access-d7r84\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8sh74\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"
Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.524996 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8sh74\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"
Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.525069 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8sh74\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"
Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.533633 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8sh74\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"
Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.538363 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8sh74\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"
Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.551074 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7r84\" (UniqueName: \"kubernetes.io/projected/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-kube-api-access-d7r84\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8sh74\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"
Feb 19 05:53:31 crc kubenswrapper[5012]: I0219 05:53:31.571209 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"
Feb 19 05:53:32 crc kubenswrapper[5012]: I0219 05:53:32.264331 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"]
Feb 19 05:53:33 crc kubenswrapper[5012]: I0219 05:53:33.120474 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" event={"ID":"a37d4335-7c06-4fa3-af51-6cfe6fb9a020","Type":"ContainerStarted","Data":"85c3a632b87baefd0f4635bc2948e782ad63514e78084b1f3a6e81fb5d16f7ff"}
Feb 19 05:53:34 crc kubenswrapper[5012]: I0219 05:53:34.131898 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" event={"ID":"a37d4335-7c06-4fa3-af51-6cfe6fb9a020","Type":"ContainerStarted","Data":"298f5bb339ba7ac183681cae0f465a88f8842b79b87df2d108cd2a29aab059a2"}
Feb 19 05:53:34 crc kubenswrapper[5012]: I0219 05:53:34.712246 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:53:34 crc kubenswrapper[5012]: E0219 05:53:34.712750 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 05:53:34 crc kubenswrapper[5012]: I0219 05:53:34.777079 5012 scope.go:117] "RemoveContainer" containerID="cfe7e53a61fb5256f22c4a39c4ac5b0bf7cc2f1ccf28f2709694c6b3715b8d0c"
Feb 19 05:53:34 crc kubenswrapper[5012]: I0219 05:53:34.831485 5012 scope.go:117] "RemoveContainer" containerID="7e6d7c6e4279d09faf69cc8325c3a9419e59f879f7e638bdadc3e1a99dfe010e"
Feb 19 05:53:34 crc kubenswrapper[5012]: I0219 05:53:34.896726 5012 scope.go:117] "RemoveContainer" containerID="cea9e8e15e555d9e359bdb9e094582010c0f5cb2424bf6d21370cbb196b19806"
Feb 19 05:53:34 crc kubenswrapper[5012]: I0219 05:53:34.959178 5012 scope.go:117] "RemoveContainer" containerID="a20a059012a07fc06fff87153b7822f281e937cfbfdfbad5c4e4671c1d2bfb30"
Feb 19 05:53:35 crc kubenswrapper[5012]: I0219 05:53:35.025082 5012 scope.go:117] "RemoveContainer" containerID="152353fb3f9bf0d9255bd600198a1803f9e2b42292b1e50815808d78b63cdb99"
Feb 19 05:53:35 crc kubenswrapper[5012]: I0219 05:53:35.063322 5012 scope.go:117] "RemoveContainer" containerID="20962d8cd5b490b4c52f0881b3105ca6e34c9e56c96152f389a414e4e6b49d12"
Feb 19 05:53:35 crc kubenswrapper[5012]: I0219 05:53:35.113014 5012 scope.go:117] "RemoveContainer" containerID="bc1e75b8122059977fabe9b750a293942be5f1e6a7daf5e75f1e50d40f43dd63"
Feb 19 05:53:35 crc kubenswrapper[5012]: I0219 05:53:35.160380 5012 scope.go:117] "RemoveContainer" containerID="67bce0df8bf4cde6aebe2e02939680ed6fbf6f5f67dfee6a477ff8a83ddd570c"
Feb 19 05:53:36 crc kubenswrapper[5012]: I0219 05:53:36.039520 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" podStartSLOduration=4.328510392 podStartE2EDuration="5.039498695s" podCreationTimestamp="2026-02-19 05:53:31 +0000 UTC" firstStartedPulling="2026-02-19 05:53:32.26685462 +0000 UTC m=+1708.300177229" lastFinishedPulling="2026-02-19 05:53:32.977842923 +0000 UTC m=+1709.011165532" observedRunningTime="2026-02-19 05:53:34.150916863 +0000 UTC m=+1710.184239432" watchObservedRunningTime="2026-02-19 05:53:36.039498695 +0000 UTC m=+1712.072821254"
Feb 19 05:53:36 crc kubenswrapper[5012]: I0219 05:53:36.042502 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-w9g6v"]
Feb 19 05:53:36 crc kubenswrapper[5012]: I0219 05:53:36.051235 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zf89d"]
Feb 19 05:53:36 crc kubenswrapper[5012]: I0219 05:53:36.060838 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-w9g6v"]
Feb 19 05:53:36 crc kubenswrapper[5012]: I0219 05:53:36.076619 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zf89d"]
Feb 19 05:53:36 crc kubenswrapper[5012]: I0219 05:53:36.721547 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="555a6373-5cdf-490e-b6ea-b0fb55425d28" path="/var/lib/kubelet/pods/555a6373-5cdf-490e-b6ea-b0fb55425d28/volumes"
Feb 19 05:53:36 crc kubenswrapper[5012]: I0219 05:53:36.722521 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be803869-4625-418d-bd39-bdbb4e6e0bfd" path="/var/lib/kubelet/pods/be803869-4625-418d-bd39-bdbb4e6e0bfd/volumes"
Feb 19 05:53:48 crc kubenswrapper[5012]: I0219 05:53:48.703965 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:53:48 crc kubenswrapper[5012]: E0219 05:53:48.705034 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 05:53:51 crc kubenswrapper[5012]: I0219 05:53:51.045257 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-cdj57"]
Feb 19 05:53:51 crc kubenswrapper[5012]: I0219 05:53:51.074849 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-cdj57"]
Feb 19 05:53:52 crc kubenswrapper[5012]: I0219 05:53:52.043715 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-jzclm"]
Feb 19 05:53:52 crc kubenswrapper[5012]: I0219 05:53:52.056491 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-jzclm"]
Feb 19 05:53:52 crc kubenswrapper[5012]: I0219 05:53:52.715547 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89f14c4e-147e-4a05-a8d9-63b93aaad4a4" path="/var/lib/kubelet/pods/89f14c4e-147e-4a05-a8d9-63b93aaad4a4/volumes"
Feb 19 05:53:52 crc kubenswrapper[5012]: I0219 05:53:52.716372 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a34a979c-9102-471f-9678-048fd5198cb8" path="/var/lib/kubelet/pods/a34a979c-9102-471f-9678-048fd5198cb8/volumes"
Feb 19 05:53:56 crc kubenswrapper[5012]: I0219 05:53:56.041499 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-xj7dw"]
Feb 19 05:53:56 crc kubenswrapper[5012]: I0219 05:53:56.053436 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-xj7dw"]
Feb 19 05:53:56 crc kubenswrapper[5012]: I0219 05:53:56.726694 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b98c972c-b350-44a1-a7c5-028914fe7bfc" path="/var/lib/kubelet/pods/b98c972c-b350-44a1-a7c5-028914fe7bfc/volumes"
Feb 19 05:54:02 crc kubenswrapper[5012]: I0219 05:54:02.703635 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:54:02 crc kubenswrapper[5012]: E0219 05:54:02.704315 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 05:54:04 crc kubenswrapper[5012]: I0219 05:54:04.060973 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-px7xk"]
Feb 19 05:54:04 crc kubenswrapper[5012]: I0219 05:54:04.071779 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-px7xk"]
Feb 19 05:54:04 crc kubenswrapper[5012]: I0219 05:54:04.724911 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="787f8a71-dee4-40d2-b33b-85bcfc58f921" path="/var/lib/kubelet/pods/787f8a71-dee4-40d2-b33b-85bcfc58f921/volumes"
Feb 19 05:54:14 crc kubenswrapper[5012]: I0219 05:54:14.710517 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:54:14 crc kubenswrapper[5012]: E0219 05:54:14.711418 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 05:54:25 crc kubenswrapper[5012]: I0219 05:54:25.702710 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:54:25 crc kubenswrapper[5012]: E0219 05:54:25.703380 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 05:54:35 crc kubenswrapper[5012]: I0219 05:54:35.369596 5012 scope.go:117] "RemoveContainer" containerID="8dfd0224f4b707b6bfc0133d1f07ea378c585adcdbe5ef8ea62dd0f00fb98923"
Feb 19 05:54:35 crc kubenswrapper[5012]: I0219 05:54:35.429524 5012 scope.go:117] "RemoveContainer" containerID="8322bcc6cc3c5b2d8222ae8137e7a8ab0b73bac7b8fa9b87cd91c71100844e13"
Feb 19 05:54:35 crc kubenswrapper[5012]: I0219 05:54:35.486390 5012 scope.go:117] "RemoveContainer" containerID="8659190e8633f7b88664c6c7e44927faf89d76ab66a53b4530e433a52d8c9664"
Feb 19 05:54:35 crc kubenswrapper[5012]: I0219 05:54:35.538612 5012 scope.go:117] "RemoveContainer" containerID="d0e335ec457cf8c772f55111337cf2d1aae49da15e75b237650c2e4a19efd926"
Feb 19 05:54:35 crc kubenswrapper[5012]: I0219 05:54:35.592160 5012 scope.go:117] "RemoveContainer" containerID="c3b30cfc4d7788c5bf2800aec00271d7a398ee5903276843825107c74fa7f5b9"
Feb 19 05:54:35 crc kubenswrapper[5012]: I0219 05:54:35.648161 5012 scope.go:117] "RemoveContainer" containerID="9a5f9edac057b3de1965c26aac0927e9eaced35943e1b07d9b0176cc162f7fc5"
Feb 19 05:54:38 crc kubenswrapper[5012]: I0219 05:54:38.704288 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:54:38 crc kubenswrapper[5012]: E0219 05:54:38.705032 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 05:54:43 crc kubenswrapper[5012]: I0219 05:54:43.065399 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-b3d3-account-create-update-jv5jh"]
Feb 19 05:54:43 crc kubenswrapper[5012]: I0219 05:54:43.081139 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-b3d3-account-create-update-jv5jh"]
Feb 19 05:54:43 crc kubenswrapper[5012]: I0219 05:54:43.092809 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-fc68-account-create-update-tfrzr"]
Feb 19 05:54:43 crc kubenswrapper[5012]: I0219 05:54:43.100247 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-fc68-account-create-update-tfrzr"]
Feb 19 05:54:43 crc kubenswrapper[5012]: I0219 05:54:43.958279 5012 generic.go:334] "Generic (PLEG): container finished" podID="a37d4335-7c06-4fa3-af51-6cfe6fb9a020" containerID="298f5bb339ba7ac183681cae0f465a88f8842b79b87df2d108cd2a29aab059a2" exitCode=0
Feb 19 05:54:43 crc kubenswrapper[5012]: I0219 05:54:43.958510 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" event={"ID":"a37d4335-7c06-4fa3-af51-6cfe6fb9a020","Type":"ContainerDied","Data":"298f5bb339ba7ac183681cae0f465a88f8842b79b87df2d108cd2a29aab059a2"}
Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.055525 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-vj27c"]
Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.067102 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-9gsgt"]
Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.079106 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-vj27c"]
Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.088581 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a4a6-account-create-update-tz4l9"]
Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.095428 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-j7vgh"]
Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.101998 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-j7vgh"]
Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.108536 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-9gsgt"]
Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.115161 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a4a6-account-create-update-tz4l9"]
Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.724754 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b1a4d80-a736-41c3-9157-c0a696c10eff" path="/var/lib/kubelet/pods/0b1a4d80-a736-41c3-9157-c0a696c10eff/volumes"
Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.725580 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fc398d7-f426-420d-981c-6bda415a2ce0" path="/var/lib/kubelet/pods/2fc398d7-f426-420d-981c-6bda415a2ce0/volumes"
Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.726299 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="768cc9af-66f9-4972-a2b4-a69b0fb15b3d" path="/var/lib/kubelet/pods/768cc9af-66f9-4972-a2b4-a69b0fb15b3d/volumes"
Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.727091 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80e98ac0-3018-4566-95b3-2d2dfd3e234e" path="/var/lib/kubelet/pods/80e98ac0-3018-4566-95b3-2d2dfd3e234e/volumes"
Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.728474 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd4d5a16-81ab-4336-99d5-570d83e4baaa" path="/var/lib/kubelet/pods/cd4d5a16-81ab-4336-99d5-570d83e4baaa/volumes"
Feb 19 05:54:44 crc kubenswrapper[5012]: I0219 05:54:44.729202 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efae98df-8f23-4e6b-bad0-f2c7a58fb86d" path="/var/lib/kubelet/pods/efae98df-8f23-4e6b-bad0-f2c7a58fb86d/volumes"
Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.523265 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"
Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.570291 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7r84\" (UniqueName: \"kubernetes.io/projected/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-kube-api-access-d7r84\") pod \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") "
Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.570496 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-inventory\") pod \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") "
Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.570600 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-ssh-key-openstack-edpm-ipam\") pod \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\" (UID: \"a37d4335-7c06-4fa3-af51-6cfe6fb9a020\") "
Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.581346 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-kube-api-access-d7r84" (OuterVolumeSpecName: "kube-api-access-d7r84") pod "a37d4335-7c06-4fa3-af51-6cfe6fb9a020" (UID: "a37d4335-7c06-4fa3-af51-6cfe6fb9a020"). InnerVolumeSpecName "kube-api-access-d7r84". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.607567 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-inventory" (OuterVolumeSpecName: "inventory") pod "a37d4335-7c06-4fa3-af51-6cfe6fb9a020" (UID: "a37d4335-7c06-4fa3-af51-6cfe6fb9a020"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.630291 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a37d4335-7c06-4fa3-af51-6cfe6fb9a020" (UID: "a37d4335-7c06-4fa3-af51-6cfe6fb9a020"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.674721 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.674775 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 19 05:54:45 crc kubenswrapper[5012]: I0219 05:54:45.674799 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7r84\" (UniqueName: \"kubernetes.io/projected/a37d4335-7c06-4fa3-af51-6cfe6fb9a020-kube-api-access-d7r84\") on node \"crc\" DevicePath \"\""
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.025529 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74" event={"ID":"a37d4335-7c06-4fa3-af51-6cfe6fb9a020","Type":"ContainerDied","Data":"85c3a632b87baefd0f4635bc2948e782ad63514e78084b1f3a6e81fb5d16f7ff"}
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.025826 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85c3a632b87baefd0f4635bc2948e782ad63514e78084b1f3a6e81fb5d16f7ff"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.025618 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8sh74"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.160540 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"]
Feb 19 05:54:46 crc kubenswrapper[5012]: E0219 05:54:46.161073 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37d4335-7c06-4fa3-af51-6cfe6fb9a020" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.161096 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37d4335-7c06-4fa3-af51-6cfe6fb9a020" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.161366 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37d4335-7c06-4fa3-af51-6cfe6fb9a020" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.162255 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.164514 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.164572 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.164590 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.165593 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.174037 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"]
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.298562 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.298755 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6fjt\" (UniqueName: \"kubernetes.io/projected/cdccd552-e703-4d8d-86b4-ff481671527f-kube-api-access-b6fjt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.298892 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.401185 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.401298 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6fjt\" (UniqueName: \"kubernetes.io/projected/cdccd552-e703-4d8d-86b4-ff481671527f-kube-api-access-b6fjt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.401360 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.406657 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.407508 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.434988 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6fjt\" (UniqueName: \"kubernetes.io/projected/cdccd552-e703-4d8d-86b4-ff481671527f-kube-api-access-b6fjt\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.508871 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"
Feb 19 05:54:46 crc kubenswrapper[5012]: I0219 05:54:46.880760 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"]
Feb 19 05:54:47 crc kubenswrapper[5012]: I0219 05:54:47.038512 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" event={"ID":"cdccd552-e703-4d8d-86b4-ff481671527f","Type":"ContainerStarted","Data":"d31d06bb65a0505a8f6ea016de34ab00f00f1edb87ee9e48c8ef5ffe39a1a5e9"}
Feb 19 05:54:48 crc kubenswrapper[5012]: I0219 05:54:48.060145 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" event={"ID":"cdccd552-e703-4d8d-86b4-ff481671527f","Type":"ContainerStarted","Data":"c4a36204435a7c333a565cd24f8b73764b976d3c9f94d39e5fc9e35c932d1d2c"}
Feb 19 05:54:48 crc kubenswrapper[5012]: I0219 05:54:48.100868 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" podStartSLOduration=1.61343835 podStartE2EDuration="2.100848977s" podCreationTimestamp="2026-02-19 05:54:46 +0000 UTC" firstStartedPulling="2026-02-19 05:54:46.892287808 +0000 UTC m=+1782.925610377" lastFinishedPulling="2026-02-19 05:54:47.379698395 +0000 UTC m=+1783.413021004" observedRunningTime="2026-02-19 05:54:48.09078477 +0000 UTC m=+1784.124107339" watchObservedRunningTime="2026-02-19 05:54:48.100848977 +0000 UTC m=+1784.134171546"
Feb 19 05:54:49 crc kubenswrapper[5012]: I0219 05:54:49.703183 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42"
Feb 19 05:54:50 crc kubenswrapper[5012]: I0219 05:54:50.088803 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"50740295b4ff1d8fcf9687906fffd0580ff7c4139e466c7a77580870ab679afe"}
Feb 19 05:54:52 crc kubenswrapper[5012]: E0219 05:54:52.910873 5012 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdccd552_e703_4d8d_86b4_ff481671527f.slice/crio-c4a36204435a7c333a565cd24f8b73764b976d3c9f94d39e5fc9e35c932d1d2c.scope\": RecentStats: unable to find data in memory cache]"
Feb 19 05:54:53 crc kubenswrapper[5012]: I0219 05:54:53.155921 5012 generic.go:334] "Generic (PLEG): container finished" podID="cdccd552-e703-4d8d-86b4-ff481671527f" containerID="c4a36204435a7c333a565cd24f8b73764b976d3c9f94d39e5fc9e35c932d1d2c" exitCode=0
Feb 19 05:54:53 crc kubenswrapper[5012]: I0219 05:54:53.156053 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" event={"ID":"cdccd552-e703-4d8d-86b4-ff481671527f","Type":"ContainerDied","Data":"c4a36204435a7c333a565cd24f8b73764b976d3c9f94d39e5fc9e35c932d1d2c"}
Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.784753 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"
Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.846417 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6fjt\" (UniqueName: \"kubernetes.io/projected/cdccd552-e703-4d8d-86b4-ff481671527f-kube-api-access-b6fjt\") pod \"cdccd552-e703-4d8d-86b4-ff481671527f\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") "
Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.846647 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-ssh-key-openstack-edpm-ipam\") pod \"cdccd552-e703-4d8d-86b4-ff481671527f\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") "
Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.846711 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-inventory\") pod \"cdccd552-e703-4d8d-86b4-ff481671527f\" (UID: \"cdccd552-e703-4d8d-86b4-ff481671527f\") "
Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.868194 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdccd552-e703-4d8d-86b4-ff481671527f-kube-api-access-b6fjt" (OuterVolumeSpecName: "kube-api-access-b6fjt") pod "cdccd552-e703-4d8d-86b4-ff481671527f" (UID: "cdccd552-e703-4d8d-86b4-ff481671527f"). InnerVolumeSpecName "kube-api-access-b6fjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.883490 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cdccd552-e703-4d8d-86b4-ff481671527f" (UID: "cdccd552-e703-4d8d-86b4-ff481671527f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.904174 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-inventory" (OuterVolumeSpecName: "inventory") pod "cdccd552-e703-4d8d-86b4-ff481671527f" (UID: "cdccd552-e703-4d8d-86b4-ff481671527f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.948618 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6fjt\" (UniqueName: \"kubernetes.io/projected/cdccd552-e703-4d8d-86b4-ff481671527f-kube-api-access-b6fjt\") on node \"crc\" DevicePath \"\""
Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.948680 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 19 05:54:54 crc kubenswrapper[5012]: I0219 05:54:54.948697 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cdccd552-e703-4d8d-86b4-ff481671527f-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.237780 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6" event={"ID":"cdccd552-e703-4d8d-86b4-ff481671527f","Type":"ContainerDied","Data":"d31d06bb65a0505a8f6ea016de34ab00f00f1edb87ee9e48c8ef5ffe39a1a5e9"}
Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.237832 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d31d06bb65a0505a8f6ea016de34ab00f00f1edb87ee9e48c8ef5ffe39a1a5e9"
Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.239433 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6"
Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.291462 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7"]
Feb 19 05:54:55 crc kubenswrapper[5012]: E0219 05:54:55.292613 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdccd552-e703-4d8d-86b4-ff481671527f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.292639 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdccd552-e703-4d8d-86b4-ff481671527f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.292876 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdccd552-e703-4d8d-86b4-ff481671527f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.293846 5012 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.297819 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.297991 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.298020 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.300370 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.326056 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7"] Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.359745 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kjhk7\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.360338 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5dxz\" (UniqueName: \"kubernetes.io/projected/0037b322-99bb-4ae2-aba4-85ddcd8243ae-kube-api-access-b5dxz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kjhk7\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 
05:54:55.360495 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kjhk7\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.461538 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kjhk7\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.461680 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kjhk7\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.461732 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5dxz\" (UniqueName: \"kubernetes.io/projected/0037b322-99bb-4ae2-aba4-85ddcd8243ae-kube-api-access-b5dxz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kjhk7\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.475181 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kjhk7\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.487028 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5dxz\" (UniqueName: \"kubernetes.io/projected/0037b322-99bb-4ae2-aba4-85ddcd8243ae-kube-api-access-b5dxz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kjhk7\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.487482 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kjhk7\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:55 crc kubenswrapper[5012]: I0219 05:54:55.629246 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:54:56 crc kubenswrapper[5012]: I0219 05:54:56.257746 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7"] Feb 19 05:54:57 crc kubenswrapper[5012]: I0219 05:54:57.267216 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" event={"ID":"0037b322-99bb-4ae2-aba4-85ddcd8243ae","Type":"ContainerStarted","Data":"fc664419fd06ca01d4b66c021c3502deae780162959a25fba1e04fbdb98da62a"} Feb 19 05:54:57 crc kubenswrapper[5012]: I0219 05:54:57.270064 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" event={"ID":"0037b322-99bb-4ae2-aba4-85ddcd8243ae","Type":"ContainerStarted","Data":"464052eadd097af96e9cb927005eb1d2b0c38df05bb8893ce205e2fbdb42a86d"} Feb 19 05:54:57 crc kubenswrapper[5012]: I0219 05:54:57.294348 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" podStartSLOduration=1.881208203 podStartE2EDuration="2.29431896s" podCreationTimestamp="2026-02-19 05:54:55 +0000 UTC" firstStartedPulling="2026-02-19 05:54:56.273779526 +0000 UTC m=+1792.307102135" lastFinishedPulling="2026-02-19 05:54:56.686890283 +0000 UTC m=+1792.720212892" observedRunningTime="2026-02-19 05:54:57.288171749 +0000 UTC m=+1793.321494328" watchObservedRunningTime="2026-02-19 05:54:57.29431896 +0000 UTC m=+1793.327641539" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.210494 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6m2dh"] Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.213792 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.227796 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6m2dh"] Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.310788 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-catalog-content\") pod \"community-operators-6m2dh\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.310982 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5srf\" (UniqueName: \"kubernetes.io/projected/93c01ce3-3353-4008-b521-c13b78700f14-kube-api-access-d5srf\") pod \"community-operators-6m2dh\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.311040 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-utilities\") pod \"community-operators-6m2dh\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.417157 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-catalog-content\") pod \"community-operators-6m2dh\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.417275 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-d5srf\" (UniqueName: \"kubernetes.io/projected/93c01ce3-3353-4008-b521-c13b78700f14-kube-api-access-d5srf\") pod \"community-operators-6m2dh\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.417356 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-utilities\") pod \"community-operators-6m2dh\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.417944 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-utilities\") pod \"community-operators-6m2dh\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.418175 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-catalog-content\") pod \"community-operators-6m2dh\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.450718 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5srf\" (UniqueName: \"kubernetes.io/projected/93c01ce3-3353-4008-b521-c13b78700f14-kube-api-access-d5srf\") pod \"community-operators-6m2dh\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:15 crc kubenswrapper[5012]: I0219 05:55:15.548586 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:16 crc kubenswrapper[5012]: I0219 05:55:16.111144 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6m2dh"] Feb 19 05:55:16 crc kubenswrapper[5012]: W0219 05:55:16.112881 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93c01ce3_3353_4008_b521_c13b78700f14.slice/crio-e6cf0ed98263ef58107b635a456c4463f2ad4e1b413edd3f453e2a6c64e1f798 WatchSource:0}: Error finding container e6cf0ed98263ef58107b635a456c4463f2ad4e1b413edd3f453e2a6c64e1f798: Status 404 returned error can't find the container with id e6cf0ed98263ef58107b635a456c4463f2ad4e1b413edd3f453e2a6c64e1f798 Feb 19 05:55:16 crc kubenswrapper[5012]: I0219 05:55:16.480007 5012 generic.go:334] "Generic (PLEG): container finished" podID="93c01ce3-3353-4008-b521-c13b78700f14" containerID="88705a5b47e877865905bfec0d79a37661c7afd39bd29b3b62dcb301a3a591e6" exitCode=0 Feb 19 05:55:16 crc kubenswrapper[5012]: I0219 05:55:16.480085 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m2dh" event={"ID":"93c01ce3-3353-4008-b521-c13b78700f14","Type":"ContainerDied","Data":"88705a5b47e877865905bfec0d79a37661c7afd39bd29b3b62dcb301a3a591e6"} Feb 19 05:55:16 crc kubenswrapper[5012]: I0219 05:55:16.480429 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m2dh" event={"ID":"93c01ce3-3353-4008-b521-c13b78700f14","Type":"ContainerStarted","Data":"e6cf0ed98263ef58107b635a456c4463f2ad4e1b413edd3f453e2a6c64e1f798"} Feb 19 05:55:17 crc kubenswrapper[5012]: I0219 05:55:17.498958 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m2dh" 
event={"ID":"93c01ce3-3353-4008-b521-c13b78700f14","Type":"ContainerStarted","Data":"f2c73daa7912b8b42ff12ed6bf21505d1239d1f38a626a18bd4a378076264990"} Feb 19 05:55:18 crc kubenswrapper[5012]: I0219 05:55:18.512105 5012 generic.go:334] "Generic (PLEG): container finished" podID="93c01ce3-3353-4008-b521-c13b78700f14" containerID="f2c73daa7912b8b42ff12ed6bf21505d1239d1f38a626a18bd4a378076264990" exitCode=0 Feb 19 05:55:18 crc kubenswrapper[5012]: I0219 05:55:18.512172 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m2dh" event={"ID":"93c01ce3-3353-4008-b521-c13b78700f14","Type":"ContainerDied","Data":"f2c73daa7912b8b42ff12ed6bf21505d1239d1f38a626a18bd4a378076264990"} Feb 19 05:55:19 crc kubenswrapper[5012]: I0219 05:55:19.065835 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vz94t"] Feb 19 05:55:19 crc kubenswrapper[5012]: I0219 05:55:19.074010 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vz94t"] Feb 19 05:55:19 crc kubenswrapper[5012]: I0219 05:55:19.525543 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m2dh" event={"ID":"93c01ce3-3353-4008-b521-c13b78700f14","Type":"ContainerStarted","Data":"7a7a28b9019ae634e7419610ad5d6e6779acece549fd96ddb1633a5dbbf4b985"} Feb 19 05:55:19 crc kubenswrapper[5012]: I0219 05:55:19.550763 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6m2dh" podStartSLOduration=2.130497702 podStartE2EDuration="4.550743789s" podCreationTimestamp="2026-02-19 05:55:15 +0000 UTC" firstStartedPulling="2026-02-19 05:55:16.48255306 +0000 UTC m=+1812.515875629" lastFinishedPulling="2026-02-19 05:55:18.902799137 +0000 UTC m=+1814.936121716" observedRunningTime="2026-02-19 05:55:19.543791207 +0000 UTC m=+1815.577113786" watchObservedRunningTime="2026-02-19 05:55:19.550743789 
+0000 UTC m=+1815.584066358" Feb 19 05:55:20 crc kubenswrapper[5012]: I0219 05:55:20.728630 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f256783-305c-4782-81c0-5aed8867b7e3" path="/var/lib/kubelet/pods/3f256783-305c-4782-81c0-5aed8867b7e3/volumes" Feb 19 05:55:25 crc kubenswrapper[5012]: I0219 05:55:25.549740 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:25 crc kubenswrapper[5012]: I0219 05:55:25.552493 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:25 crc kubenswrapper[5012]: I0219 05:55:25.633889 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:25 crc kubenswrapper[5012]: I0219 05:55:25.726358 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:25 crc kubenswrapper[5012]: I0219 05:55:25.876952 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6m2dh"] Feb 19 05:55:27 crc kubenswrapper[5012]: I0219 05:55:27.624854 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6m2dh" podUID="93c01ce3-3353-4008-b521-c13b78700f14" containerName="registry-server" containerID="cri-o://7a7a28b9019ae634e7419610ad5d6e6779acece549fd96ddb1633a5dbbf4b985" gracePeriod=2 Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.636359 5012 generic.go:334] "Generic (PLEG): container finished" podID="93c01ce3-3353-4008-b521-c13b78700f14" containerID="7a7a28b9019ae634e7419610ad5d6e6779acece549fd96ddb1633a5dbbf4b985" exitCode=0 Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.636458 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-6m2dh" event={"ID":"93c01ce3-3353-4008-b521-c13b78700f14","Type":"ContainerDied","Data":"7a7a28b9019ae634e7419610ad5d6e6779acece549fd96ddb1633a5dbbf4b985"} Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.636759 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m2dh" event={"ID":"93c01ce3-3353-4008-b521-c13b78700f14","Type":"ContainerDied","Data":"e6cf0ed98263ef58107b635a456c4463f2ad4e1b413edd3f453e2a6c64e1f798"} Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.636783 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6cf0ed98263ef58107b635a456c4463f2ad4e1b413edd3f453e2a6c64e1f798" Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.735806 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.866376 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5srf\" (UniqueName: \"kubernetes.io/projected/93c01ce3-3353-4008-b521-c13b78700f14-kube-api-access-d5srf\") pod \"93c01ce3-3353-4008-b521-c13b78700f14\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.866809 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-utilities\") pod \"93c01ce3-3353-4008-b521-c13b78700f14\" (UID: \"93c01ce3-3353-4008-b521-c13b78700f14\") " Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.867116 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-catalog-content\") pod \"93c01ce3-3353-4008-b521-c13b78700f14\" (UID: 
\"93c01ce3-3353-4008-b521-c13b78700f14\") " Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.867789 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-utilities" (OuterVolumeSpecName: "utilities") pod "93c01ce3-3353-4008-b521-c13b78700f14" (UID: "93c01ce3-3353-4008-b521-c13b78700f14"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.875463 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c01ce3-3353-4008-b521-c13b78700f14-kube-api-access-d5srf" (OuterVolumeSpecName: "kube-api-access-d5srf") pod "93c01ce3-3353-4008-b521-c13b78700f14" (UID: "93c01ce3-3353-4008-b521-c13b78700f14"). InnerVolumeSpecName "kube-api-access-d5srf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.921342 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93c01ce3-3353-4008-b521-c13b78700f14" (UID: "93c01ce3-3353-4008-b521-c13b78700f14"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.970470 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.970517 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5srf\" (UniqueName: \"kubernetes.io/projected/93c01ce3-3353-4008-b521-c13b78700f14-kube-api-access-d5srf\") on node \"crc\" DevicePath \"\"" Feb 19 05:55:28 crc kubenswrapper[5012]: I0219 05:55:28.970534 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c01ce3-3353-4008-b521-c13b78700f14-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:55:29 crc kubenswrapper[5012]: I0219 05:55:29.645088 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6m2dh" Feb 19 05:55:29 crc kubenswrapper[5012]: I0219 05:55:29.691989 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6m2dh"] Feb 19 05:55:29 crc kubenswrapper[5012]: I0219 05:55:29.705796 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6m2dh"] Feb 19 05:55:30 crc kubenswrapper[5012]: I0219 05:55:30.727165 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c01ce3-3353-4008-b521-c13b78700f14" path="/var/lib/kubelet/pods/93c01ce3-3353-4008-b521-c13b78700f14/volumes" Feb 19 05:55:35 crc kubenswrapper[5012]: I0219 05:55:35.882477 5012 scope.go:117] "RemoveContainer" containerID="2adf806f0d4859a0678f70c1d3e40183b96910ec8d7d4b4dd3e550a8e559d848" Feb 19 05:55:35 crc kubenswrapper[5012]: I0219 05:55:35.925999 5012 scope.go:117] "RemoveContainer" 
containerID="32216c19b01878e03cf37157a913c36ac04ed37d7c518d5811bfc0096e2fc84b" Feb 19 05:55:35 crc kubenswrapper[5012]: I0219 05:55:35.994429 5012 scope.go:117] "RemoveContainer" containerID="43cb426b1d824281e78b0291231050744f408cc09f73ab56e4ae893d291e9f7e" Feb 19 05:55:36 crc kubenswrapper[5012]: I0219 05:55:36.053606 5012 scope.go:117] "RemoveContainer" containerID="48110d1bed52f125950a67152ee45f991adbabd56a8a45d17e8316bb03423870" Feb 19 05:55:36 crc kubenswrapper[5012]: I0219 05:55:36.097838 5012 scope.go:117] "RemoveContainer" containerID="b0ed53407a3cb3810cc4f0ec6ea8d71443cb0203ae2152d5e770b7f505f82370" Feb 19 05:55:36 crc kubenswrapper[5012]: I0219 05:55:36.165061 5012 scope.go:117] "RemoveContainer" containerID="db7dcb78edaee0fd0bebd3da354f9bac23c709f6cc5c4054736fd0aaea637cae" Feb 19 05:55:36 crc kubenswrapper[5012]: I0219 05:55:36.194560 5012 scope.go:117] "RemoveContainer" containerID="7b6605dba53e000181057a053dcadb95742b096a45a5fa3c7a87f8e866bb1bd9" Feb 19 05:55:37 crc kubenswrapper[5012]: I0219 05:55:37.744183 5012 generic.go:334] "Generic (PLEG): container finished" podID="0037b322-99bb-4ae2-aba4-85ddcd8243ae" containerID="fc664419fd06ca01d4b66c021c3502deae780162959a25fba1e04fbdb98da62a" exitCode=0 Feb 19 05:55:37 crc kubenswrapper[5012]: I0219 05:55:37.744280 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" event={"ID":"0037b322-99bb-4ae2-aba4-85ddcd8243ae","Type":"ContainerDied","Data":"fc664419fd06ca01d4b66c021c3502deae780162959a25fba1e04fbdb98da62a"} Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.301673 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.418194 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-ssh-key-openstack-edpm-ipam\") pod \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.418570 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5dxz\" (UniqueName: \"kubernetes.io/projected/0037b322-99bb-4ae2-aba4-85ddcd8243ae-kube-api-access-b5dxz\") pod \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.418616 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-inventory\") pod \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\" (UID: \"0037b322-99bb-4ae2-aba4-85ddcd8243ae\") " Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.425199 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0037b322-99bb-4ae2-aba4-85ddcd8243ae-kube-api-access-b5dxz" (OuterVolumeSpecName: "kube-api-access-b5dxz") pod "0037b322-99bb-4ae2-aba4-85ddcd8243ae" (UID: "0037b322-99bb-4ae2-aba4-85ddcd8243ae"). InnerVolumeSpecName "kube-api-access-b5dxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.454143 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-inventory" (OuterVolumeSpecName: "inventory") pod "0037b322-99bb-4ae2-aba4-85ddcd8243ae" (UID: "0037b322-99bb-4ae2-aba4-85ddcd8243ae"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.483574 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0037b322-99bb-4ae2-aba4-85ddcd8243ae" (UID: "0037b322-99bb-4ae2-aba4-85ddcd8243ae"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.521785 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.521828 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5dxz\" (UniqueName: \"kubernetes.io/projected/0037b322-99bb-4ae2-aba4-85ddcd8243ae-kube-api-access-b5dxz\") on node \"crc\" DevicePath \"\""
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.521841 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0037b322-99bb-4ae2-aba4-85ddcd8243ae-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.771776 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7" event={"ID":"0037b322-99bb-4ae2-aba4-85ddcd8243ae","Type":"ContainerDied","Data":"464052eadd097af96e9cb927005eb1d2b0c38df05bb8893ce205e2fbdb42a86d"}
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.771834 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="464052eadd097af96e9cb927005eb1d2b0c38df05bb8893ce205e2fbdb42a86d"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.771867 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kjhk7"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.916790 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"]
Feb 19 05:55:39 crc kubenswrapper[5012]: E0219 05:55:39.917648 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0037b322-99bb-4ae2-aba4-85ddcd8243ae" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.917686 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0037b322-99bb-4ae2-aba4-85ddcd8243ae" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:55:39 crc kubenswrapper[5012]: E0219 05:55:39.917715 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c01ce3-3353-4008-b521-c13b78700f14" containerName="registry-server"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.917731 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c01ce3-3353-4008-b521-c13b78700f14" containerName="registry-server"
Feb 19 05:55:39 crc kubenswrapper[5012]: E0219 05:55:39.917770 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c01ce3-3353-4008-b521-c13b78700f14" containerName="extract-utilities"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.917783 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c01ce3-3353-4008-b521-c13b78700f14" containerName="extract-utilities"
Feb 19 05:55:39 crc kubenswrapper[5012]: E0219 05:55:39.917827 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c01ce3-3353-4008-b521-c13b78700f14" containerName="extract-content"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.917841 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c01ce3-3353-4008-b521-c13b78700f14" containerName="extract-content"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.918183 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0037b322-99bb-4ae2-aba4-85ddcd8243ae" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.918251 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c01ce3-3353-4008-b521-c13b78700f14" containerName="registry-server"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.919451 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.921812 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.921820 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.923003 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.923488 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.932254 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bg5db\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.932540 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bg5db\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.932605 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45xlf\" (UniqueName: \"kubernetes.io/projected/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-kube-api-access-45xlf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bg5db\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:39 crc kubenswrapper[5012]: I0219 05:55:39.935605 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"]
Feb 19 05:55:40 crc kubenswrapper[5012]: I0219 05:55:40.035007 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bg5db\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:40 crc kubenswrapper[5012]: I0219 05:55:40.035624 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bg5db\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:40 crc kubenswrapper[5012]: I0219 05:55:40.035827 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45xlf\" (UniqueName: \"kubernetes.io/projected/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-kube-api-access-45xlf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bg5db\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:40 crc kubenswrapper[5012]: I0219 05:55:40.038666 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bg5db\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:40 crc kubenswrapper[5012]: I0219 05:55:40.039352 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bg5db\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:40 crc kubenswrapper[5012]: I0219 05:55:40.054735 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45xlf\" (UniqueName: \"kubernetes.io/projected/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-kube-api-access-45xlf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bg5db\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:40 crc kubenswrapper[5012]: I0219 05:55:40.249159 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:55:40 crc kubenswrapper[5012]: I0219 05:55:40.854105 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 05:55:40 crc kubenswrapper[5012]: I0219 05:55:40.858520 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"]
Feb 19 05:55:41 crc kubenswrapper[5012]: I0219 05:55:41.800778 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db" event={"ID":"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3","Type":"ContainerStarted","Data":"bc05575ff0cfed5776b9bbb64af2657cd2088231709e3bde4bcd64e3805d965c"}
Feb 19 05:55:41 crc kubenswrapper[5012]: I0219 05:55:41.801149 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db" event={"ID":"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3","Type":"ContainerStarted","Data":"15f3c2beb4606524fdb23e93928af3f16cc228ec4fea4976e2a4bd58c03f5a59"}
Feb 19 05:55:41 crc kubenswrapper[5012]: I0219 05:55:41.833061 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db" podStartSLOduration=2.384688291 podStartE2EDuration="2.833030735s" podCreationTimestamp="2026-02-19 05:55:39 +0000 UTC" firstStartedPulling="2026-02-19 05:55:40.853737991 +0000 UTC m=+1836.887060570" lastFinishedPulling="2026-02-19 05:55:41.302080405 +0000 UTC m=+1837.335403014" observedRunningTime="2026-02-19 05:55:41.829273322 +0000 UTC m=+1837.862595921" watchObservedRunningTime="2026-02-19 05:55:41.833030735 +0000 UTC m=+1837.866353344"
Feb 19 05:55:43 crc kubenswrapper[5012]: I0219 05:55:43.066719 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-nr45z"]
Feb 19 05:55:43 crc kubenswrapper[5012]: I0219 05:55:43.077864 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-nr45z"]
Feb 19 05:55:44 crc kubenswrapper[5012]: I0219 05:55:44.721723 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70ce9757-cdf1-4864-95ad-9d25fb9830a9" path="/var/lib/kubelet/pods/70ce9757-cdf1-4864-95ad-9d25fb9830a9/volumes"
Feb 19 05:55:47 crc kubenswrapper[5012]: I0219 05:55:47.038137 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nbn8z"]
Feb 19 05:55:47 crc kubenswrapper[5012]: I0219 05:55:47.057709 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nbn8z"]
Feb 19 05:55:48 crc kubenswrapper[5012]: I0219 05:55:48.724060 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fc8fbb1-0e37-419f-86e0-6ce8db99225d" path="/var/lib/kubelet/pods/7fc8fbb1-0e37-419f-86e0-6ce8db99225d/volumes"
Feb 19 05:56:26 crc kubenswrapper[5012]: I0219 05:56:26.046099 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-4t5r4"]
Feb 19 05:56:26 crc kubenswrapper[5012]: I0219 05:56:26.063896 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-4t5r4"]
Feb 19 05:56:26 crc kubenswrapper[5012]: I0219 05:56:26.728564 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f597fc0f-7407-4f05-916c-70f7a3f145ec" path="/var/lib/kubelet/pods/f597fc0f-7407-4f05-916c-70f7a3f145ec/volumes"
Feb 19 05:56:35 crc kubenswrapper[5012]: I0219 05:56:35.446149 5012 generic.go:334] "Generic (PLEG): container finished" podID="8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3" containerID="bc05575ff0cfed5776b9bbb64af2657cd2088231709e3bde4bcd64e3805d965c" exitCode=0
Feb 19 05:56:35 crc kubenswrapper[5012]: I0219 05:56:35.446272 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db" event={"ID":"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3","Type":"ContainerDied","Data":"bc05575ff0cfed5776b9bbb64af2657cd2088231709e3bde4bcd64e3805d965c"}
Feb 19 05:56:36 crc kubenswrapper[5012]: I0219 05:56:36.430783 5012 scope.go:117] "RemoveContainer" containerID="0b98797f9f7e97071d4699ed1c59c23ddac69aff6fb8708f48bdc42a56a8cf34"
Feb 19 05:56:36 crc kubenswrapper[5012]: I0219 05:56:36.516069 5012 scope.go:117] "RemoveContainer" containerID="021fb32c8f118be6cb115c199b5bccac76ab7b25e96dc7239f4fa322280c2c3c"
Feb 19 05:56:36 crc kubenswrapper[5012]: I0219 05:56:36.564183 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-59bfbf7475-v98h9" podUID="4c9aa274-240d-4d50-b38a-754dd493f351" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Feb 19 05:56:36 crc kubenswrapper[5012]: I0219 05:56:36.593201 5012 scope.go:117] "RemoveContainer" containerID="9d9ddb4f57f745aaa08f8b6e7a9a59d578aaf776b154c4fb9be135f3d48b048d"
Feb 19 05:56:36 crc kubenswrapper[5012]: I0219 05:56:36.975844 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.140710 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45xlf\" (UniqueName: \"kubernetes.io/projected/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-kube-api-access-45xlf\") pod \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") "
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.140799 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-inventory\") pod \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") "
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.140911 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-ssh-key-openstack-edpm-ipam\") pod \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\" (UID: \"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3\") "
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.148145 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-kube-api-access-45xlf" (OuterVolumeSpecName: "kube-api-access-45xlf") pod "8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3" (UID: "8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3"). InnerVolumeSpecName "kube-api-access-45xlf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.168723 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3" (UID: "8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.177788 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-inventory" (OuterVolumeSpecName: "inventory") pod "8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3" (UID: "8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.244181 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45xlf\" (UniqueName: \"kubernetes.io/projected/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-kube-api-access-45xlf\") on node \"crc\" DevicePath \"\""
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.244233 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.244252 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.482106 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db" event={"ID":"8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3","Type":"ContainerDied","Data":"15f3c2beb4606524fdb23e93928af3f16cc228ec4fea4976e2a4bd58c03f5a59"}
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.482168 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15f3c2beb4606524fdb23e93928af3f16cc228ec4fea4976e2a4bd58c03f5a59"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.482224 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bg5db"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.607939 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9rlns"]
Feb 19 05:56:37 crc kubenswrapper[5012]: E0219 05:56:37.608857 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.608880 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.609523 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.610567 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.612733 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.613300 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.613647 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.613843 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.637015 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9rlns"]
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.757643 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9rlns\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.757759 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9rlns\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.757905 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl2vb\" (UniqueName: \"kubernetes.io/projected/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-kube-api-access-zl2vb\") pod \"ssh-known-hosts-edpm-deployment-9rlns\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.860241 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9rlns\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.860354 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9rlns\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.860472 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl2vb\" (UniqueName: \"kubernetes.io/projected/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-kube-api-access-zl2vb\") pod \"ssh-known-hosts-edpm-deployment-9rlns\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.866137 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-9rlns\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.874334 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-9rlns\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.892679 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl2vb\" (UniqueName: \"kubernetes.io/projected/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-kube-api-access-zl2vb\") pod \"ssh-known-hosts-edpm-deployment-9rlns\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") " pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:37 crc kubenswrapper[5012]: I0219 05:56:37.940160 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:38 crc kubenswrapper[5012]: I0219 05:56:38.625658 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-9rlns"]
Feb 19 05:56:38 crc kubenswrapper[5012]: W0219 05:56:38.631483 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7c29e8e_a085_4dcc_8dbf_7fa1f971a4dc.slice/crio-4ebc58fe5109cb0c462c6d1a9d43129817e32a3c068ba09c30dc8de28f4564e1 WatchSource:0}: Error finding container 4ebc58fe5109cb0c462c6d1a9d43129817e32a3c068ba09c30dc8de28f4564e1: Status 404 returned error can't find the container with id 4ebc58fe5109cb0c462c6d1a9d43129817e32a3c068ba09c30dc8de28f4564e1
Feb 19 05:56:39 crc kubenswrapper[5012]: I0219 05:56:39.506654 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9rlns" event={"ID":"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc","Type":"ContainerStarted","Data":"0457ea42f5ff1d0e68bcd711d97118c2a58a9eacf2c96d3ae743704c8fa2175b"}
Feb 19 05:56:39 crc kubenswrapper[5012]: I0219 05:56:39.506970 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9rlns" event={"ID":"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc","Type":"ContainerStarted","Data":"4ebc58fe5109cb0c462c6d1a9d43129817e32a3c068ba09c30dc8de28f4564e1"}
Feb 19 05:56:39 crc kubenswrapper[5012]: I0219 05:56:39.543498 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-9rlns" podStartSLOduration=2.055351882 podStartE2EDuration="2.543480426s" podCreationTimestamp="2026-02-19 05:56:37 +0000 UTC" firstStartedPulling="2026-02-19 05:56:38.638754719 +0000 UTC m=+1894.672077328" lastFinishedPulling="2026-02-19 05:56:39.126883293 +0000 UTC m=+1895.160205872" observedRunningTime="2026-02-19 05:56:39.527295817 +0000 UTC m=+1895.560618406" watchObservedRunningTime="2026-02-19 05:56:39.543480426 +0000 UTC m=+1895.576802995"
Feb 19 05:56:47 crc kubenswrapper[5012]: I0219 05:56:47.626600 5012 generic.go:334] "Generic (PLEG): container finished" podID="f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc" containerID="0457ea42f5ff1d0e68bcd711d97118c2a58a9eacf2c96d3ae743704c8fa2175b" exitCode=0
Feb 19 05:56:47 crc kubenswrapper[5012]: I0219 05:56:47.626730 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9rlns" event={"ID":"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc","Type":"ContainerDied","Data":"0457ea42f5ff1d0e68bcd711d97118c2a58a9eacf2c96d3ae743704c8fa2175b"}
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.106567 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.216033 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-ssh-key-openstack-edpm-ipam\") pod \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") "
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.216157 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-inventory-0\") pod \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") "
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.217066 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl2vb\" (UniqueName: \"kubernetes.io/projected/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-kube-api-access-zl2vb\") pod \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\" (UID: \"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc\") "
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.224932 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-kube-api-access-zl2vb" (OuterVolumeSpecName: "kube-api-access-zl2vb") pod "f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc" (UID: "f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc"). InnerVolumeSpecName "kube-api-access-zl2vb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.263729 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc" (UID: "f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.270916 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc" (UID: "f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.319091 5012 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-inventory-0\") on node \"crc\" DevicePath \"\""
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.319137 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl2vb\" (UniqueName: \"kubernetes.io/projected/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-kube-api-access-zl2vb\") on node \"crc\" DevicePath \"\""
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.319159 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.658292 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-9rlns" event={"ID":"f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc","Type":"ContainerDied","Data":"4ebc58fe5109cb0c462c6d1a9d43129817e32a3c068ba09c30dc8de28f4564e1"}
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.658384 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ebc58fe5109cb0c462c6d1a9d43129817e32a3c068ba09c30dc8de28f4564e1"
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.658442 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-9rlns"
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.800426 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl"]
Feb 19 05:56:49 crc kubenswrapper[5012]: E0219 05:56:49.801072 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc" containerName="ssh-known-hosts-edpm-deployment"
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.801103 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc" containerName="ssh-known-hosts-edpm-deployment"
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.801571 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc" containerName="ssh-known-hosts-edpm-deployment"
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.802760 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl"
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.808442 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl"]
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.809596 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.809923 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.810114 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.810290 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2"
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.935574 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xnxl\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl"
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.936000 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxxv9\" (UniqueName: \"kubernetes.io/projected/86b984ed-bd52-4348-9415-dccff4a0e1a4-kube-api-access-gxxv9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xnxl\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl"
Feb 19 05:56:49 crc kubenswrapper[5012]: I0219 05:56:49.936080 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xnxl\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl"
Feb 19 05:56:50 crc kubenswrapper[5012]: I0219 05:56:50.038201 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxxv9\" (UniqueName: \"kubernetes.io/projected/86b984ed-bd52-4348-9415-dccff4a0e1a4-kube-api-access-gxxv9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xnxl\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl"
Feb 19 05:56:50 crc kubenswrapper[5012]: I0219 05:56:50.038407 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xnxl\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl"
Feb 19 05:56:50 crc kubenswrapper[5012]: I0219 05:56:50.038672 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xnxl\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl"
Feb 19 05:56:50 crc kubenswrapper[5012]: I0219 05:56:50.046067 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xnxl\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl"
Feb 19 05:56:50 crc kubenswrapper[5012]: I0219 05:56:50.052226 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xnxl\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl"
Feb 19 05:56:50 crc kubenswrapper[5012]: I0219 05:56:50.058063 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxxv9\" (UniqueName: \"kubernetes.io/projected/86b984ed-bd52-4348-9415-dccff4a0e1a4-kube-api-access-gxxv9\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xnxl\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl"
Feb 19 05:56:50 crc kubenswrapper[5012]: I0219 05:56:50.143985 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl"
Feb 19 05:56:50 crc kubenswrapper[5012]: I0219 05:56:50.533748 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl"]
Feb 19 05:56:50 crc kubenswrapper[5012]: I0219 05:56:50.670092 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" event={"ID":"86b984ed-bd52-4348-9415-dccff4a0e1a4","Type":"ContainerStarted","Data":"0beb898611f4ed15a0cd19f9789f1a08d1d7d38c2e8a2aea667948b1e8299e92"}
Feb 19 05:56:51 crc kubenswrapper[5012]: I0219 05:56:51.684900 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" event={"ID":"86b984ed-bd52-4348-9415-dccff4a0e1a4","Type":"ContainerStarted","Data":"004667b1fced0c38d1f3741b0ac4b7f40b4fff628ed0873ae170dd8144935811"}
Feb 19 05:56:51 crc kubenswrapper[5012]: I0219 05:56:51.742381 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" podStartSLOduration=2.365669061 podStartE2EDuration="2.74235108s" podCreationTimestamp="2026-02-19 05:56:49 +0000 UTC" firstStartedPulling="2026-02-19 05:56:50.538435073 +0000 UTC m=+1906.571757672" lastFinishedPulling="2026-02-19 05:56:50.915117122 +0000 UTC m=+1906.948439691" observedRunningTime="2026-02-19 05:56:51.732270562 +0000 UTC m=+1907.765593131" watchObservedRunningTime="2026-02-19 05:56:51.74235108 +0000 UTC m=+1907.775673669"
Feb 19 05:56:59 crc kubenswrapper[5012]: I0219 05:56:59.784130 5012 generic.go:334] "Generic (PLEG): container finished" podID="86b984ed-bd52-4348-9415-dccff4a0e1a4" containerID="004667b1fced0c38d1f3741b0ac4b7f40b4fff628ed0873ae170dd8144935811" exitCode=0
Feb 19 05:56:59 crc kubenswrapper[5012]: I0219 05:56:59.784266 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" event={"ID":"86b984ed-bd52-4348-9415-dccff4a0e1a4","Type":"ContainerDied","Data":"004667b1fced0c38d1f3741b0ac4b7f40b4fff628ed0873ae170dd8144935811"} Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.249874 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.302974 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-inventory\") pod \"86b984ed-bd52-4348-9415-dccff4a0e1a4\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.303180 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxxv9\" (UniqueName: \"kubernetes.io/projected/86b984ed-bd52-4348-9415-dccff4a0e1a4-kube-api-access-gxxv9\") pod \"86b984ed-bd52-4348-9415-dccff4a0e1a4\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.303225 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-ssh-key-openstack-edpm-ipam\") pod \"86b984ed-bd52-4348-9415-dccff4a0e1a4\" (UID: \"86b984ed-bd52-4348-9415-dccff4a0e1a4\") " Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.310979 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b984ed-bd52-4348-9415-dccff4a0e1a4-kube-api-access-gxxv9" (OuterVolumeSpecName: "kube-api-access-gxxv9") pod "86b984ed-bd52-4348-9415-dccff4a0e1a4" (UID: "86b984ed-bd52-4348-9415-dccff4a0e1a4"). InnerVolumeSpecName "kube-api-access-gxxv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.332883 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-inventory" (OuterVolumeSpecName: "inventory") pod "86b984ed-bd52-4348-9415-dccff4a0e1a4" (UID: "86b984ed-bd52-4348-9415-dccff4a0e1a4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.333520 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "86b984ed-bd52-4348-9415-dccff4a0e1a4" (UID: "86b984ed-bd52-4348-9415-dccff4a0e1a4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.406426 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.406465 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxxv9\" (UniqueName: \"kubernetes.io/projected/86b984ed-bd52-4348-9415-dccff4a0e1a4-kube-api-access-gxxv9\") on node \"crc\" DevicePath \"\"" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.406479 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86b984ed-bd52-4348-9415-dccff4a0e1a4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.809368 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" 
event={"ID":"86b984ed-bd52-4348-9415-dccff4a0e1a4","Type":"ContainerDied","Data":"0beb898611f4ed15a0cd19f9789f1a08d1d7d38c2e8a2aea667948b1e8299e92"} Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.809430 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0beb898611f4ed15a0cd19f9789f1a08d1d7d38c2e8a2aea667948b1e8299e92" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.809431 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xnxl" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.908463 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs"] Feb 19 05:57:01 crc kubenswrapper[5012]: E0219 05:57:01.909140 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b984ed-bd52-4348-9415-dccff4a0e1a4" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.909160 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b984ed-bd52-4348-9415-dccff4a0e1a4" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.909518 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="86b984ed-bd52-4348-9415-dccff4a0e1a4" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.910255 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.912452 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.912787 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.912852 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.913162 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:57:01 crc kubenswrapper[5012]: I0219 05:57:01.929908 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs"] Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.019970 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pqjv\" (UniqueName: \"kubernetes.io/projected/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-kube-api-access-7pqjv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.020057 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 
05:57:02.020572 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.123269 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pqjv\" (UniqueName: \"kubernetes.io/projected/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-kube-api-access-7pqjv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.123420 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.123742 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.132066 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.141360 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.145729 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pqjv\" (UniqueName: \"kubernetes.io/projected/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-kube-api-access-7pqjv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.237590 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:02 crc kubenswrapper[5012]: I0219 05:57:02.917784 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs"] Feb 19 05:57:03 crc kubenswrapper[5012]: I0219 05:57:03.843885 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" event={"ID":"464de984-0dd6-4c4d-aed3-afbf84e0cdcf","Type":"ContainerStarted","Data":"9a4a0914747cd2de20603a5e64c71f5edb88e8b126aebbddab0dc4bad3dbf103"} Feb 19 05:57:03 crc kubenswrapper[5012]: I0219 05:57:03.844662 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" event={"ID":"464de984-0dd6-4c4d-aed3-afbf84e0cdcf","Type":"ContainerStarted","Data":"3a6afe214b6ff21a320431d18029a8b3ca770fd5e32e700ac909bbecfbfc0c9b"} Feb 19 05:57:03 crc kubenswrapper[5012]: I0219 05:57:03.881817 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" podStartSLOduration=2.421210464 podStartE2EDuration="2.88179754s" podCreationTimestamp="2026-02-19 05:57:01 +0000 UTC" firstStartedPulling="2026-02-19 05:57:02.922858257 +0000 UTC m=+1918.956180866" lastFinishedPulling="2026-02-19 05:57:03.383445333 +0000 UTC m=+1919.416767942" observedRunningTime="2026-02-19 05:57:03.873927136 +0000 UTC m=+1919.907249745" watchObservedRunningTime="2026-02-19 05:57:03.88179754 +0000 UTC m=+1919.915120139" Feb 19 05:57:13 crc kubenswrapper[5012]: I0219 05:57:13.982891 5012 generic.go:334] "Generic (PLEG): container finished" podID="464de984-0dd6-4c4d-aed3-afbf84e0cdcf" containerID="9a4a0914747cd2de20603a5e64c71f5edb88e8b126aebbddab0dc4bad3dbf103" exitCode=0 Feb 19 05:57:13 crc kubenswrapper[5012]: I0219 05:57:13.983028 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" event={"ID":"464de984-0dd6-4c4d-aed3-afbf84e0cdcf","Type":"ContainerDied","Data":"9a4a0914747cd2de20603a5e64c71f5edb88e8b126aebbddab0dc4bad3dbf103"} Feb 19 05:57:14 crc kubenswrapper[5012]: I0219 05:57:14.430616 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:57:14 crc kubenswrapper[5012]: I0219 05:57:14.430666 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.519249 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.558851 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-ssh-key-openstack-edpm-ipam\") pod \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.559058 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pqjv\" (UniqueName: \"kubernetes.io/projected/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-kube-api-access-7pqjv\") pod \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.559148 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-inventory\") pod \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\" (UID: \"464de984-0dd6-4c4d-aed3-afbf84e0cdcf\") " Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.571667 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-kube-api-access-7pqjv" (OuterVolumeSpecName: "kube-api-access-7pqjv") pod "464de984-0dd6-4c4d-aed3-afbf84e0cdcf" (UID: "464de984-0dd6-4c4d-aed3-afbf84e0cdcf"). InnerVolumeSpecName "kube-api-access-7pqjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.615283 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-inventory" (OuterVolumeSpecName: "inventory") pod "464de984-0dd6-4c4d-aed3-afbf84e0cdcf" (UID: "464de984-0dd6-4c4d-aed3-afbf84e0cdcf"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.616286 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "464de984-0dd6-4c4d-aed3-afbf84e0cdcf" (UID: "464de984-0dd6-4c4d-aed3-afbf84e0cdcf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.660719 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.660758 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:57:15 crc kubenswrapper[5012]: I0219 05:57:15.660770 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pqjv\" (UniqueName: \"kubernetes.io/projected/464de984-0dd6-4c4d-aed3-afbf84e0cdcf-kube-api-access-7pqjv\") on node \"crc\" DevicePath \"\"" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.024827 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" event={"ID":"464de984-0dd6-4c4d-aed3-afbf84e0cdcf","Type":"ContainerDied","Data":"3a6afe214b6ff21a320431d18029a8b3ca770fd5e32e700ac909bbecfbfc0c9b"} Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.025192 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a6afe214b6ff21a320431d18029a8b3ca770fd5e32e700ac909bbecfbfc0c9b" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 
05:57:16.024895 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.139548 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5"] Feb 19 05:57:16 crc kubenswrapper[5012]: E0219 05:57:16.139983 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464de984-0dd6-4c4d-aed3-afbf84e0cdcf" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.140004 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="464de984-0dd6-4c4d-aed3-afbf84e0cdcf" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.140222 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="464de984-0dd6-4c4d-aed3-afbf84e0cdcf" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.141002 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.143367 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.144668 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.145672 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.146050 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.146483 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.146494 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.147117 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.147682 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.170254 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5"] Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174223 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174313 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174362 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174398 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174474 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf9mg\" (UniqueName: 
\"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-kube-api-access-zf9mg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174537 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174564 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174589 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174609 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174632 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174677 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174714 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174751 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.174783 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276488 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276531 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276567 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276588 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276613 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276668 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276702 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276737 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276768 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276794 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276836 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276875 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276911 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.276980 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf9mg\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-kube-api-access-zf9mg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.281105 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.281927 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.281945 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.282445 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.283065 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.283790 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.284791 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.285493 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.285667 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.285724 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.285786 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.287103 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.290290 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.296335 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf9mg\" (UniqueName: 
\"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-kube-api-access-zf9mg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.459998 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:57:16 crc kubenswrapper[5012]: I0219 05:57:16.850834 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5"] Feb 19 05:57:17 crc kubenswrapper[5012]: I0219 05:57:17.036848 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" event={"ID":"d869003b-7b03-4a8b-9f9c-73ca0ec4f359","Type":"ContainerStarted","Data":"c79daf974f33d700f2f0838eecbe85e8cd1c1c0b3e0d9db46ea76aecfbdd9d4f"} Feb 19 05:57:18 crc kubenswrapper[5012]: I0219 05:57:18.051183 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" event={"ID":"d869003b-7b03-4a8b-9f9c-73ca0ec4f359","Type":"ContainerStarted","Data":"3083a62009353315b9fc731e7060bf2fdf582bb6651774b6421dccef77849ffe"} Feb 19 05:57:18 crc kubenswrapper[5012]: I0219 05:57:18.084008 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" podStartSLOduration=1.603627669 podStartE2EDuration="2.083987551s" podCreationTimestamp="2026-02-19 05:57:16 +0000 UTC" firstStartedPulling="2026-02-19 05:57:16.865277621 +0000 UTC m=+1932.898600190" lastFinishedPulling="2026-02-19 05:57:17.345637463 +0000 UTC m=+1933.378960072" observedRunningTime="2026-02-19 05:57:18.071423482 +0000 UTC m=+1934.104746051" watchObservedRunningTime="2026-02-19 05:57:18.083987551 +0000 UTC 
m=+1934.117310120" Feb 19 05:57:44 crc kubenswrapper[5012]: I0219 05:57:44.430642 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:57:44 crc kubenswrapper[5012]: I0219 05:57:44.431553 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:57:58 crc kubenswrapper[5012]: I0219 05:57:58.494137 5012 generic.go:334] "Generic (PLEG): container finished" podID="d869003b-7b03-4a8b-9f9c-73ca0ec4f359" containerID="3083a62009353315b9fc731e7060bf2fdf582bb6651774b6421dccef77849ffe" exitCode=0 Feb 19 05:57:58 crc kubenswrapper[5012]: I0219 05:57:58.494260 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" event={"ID":"d869003b-7b03-4a8b-9f9c-73ca0ec4f359","Type":"ContainerDied","Data":"3083a62009353315b9fc731e7060bf2fdf582bb6651774b6421dccef77849ffe"} Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.100358 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.162638 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.162728 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf9mg\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-kube-api-access-zf9mg\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.162787 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-bootstrap-combined-ca-bundle\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.162817 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-repo-setup-combined-ca-bundle\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.162895 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-neutron-metadata-combined-ca-bundle\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" 
(UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.162951 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ovn-combined-ca-bundle\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.162991 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-nova-combined-ca-bundle\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.163041 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-ovn-default-certs-0\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.163092 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-inventory\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.163124 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-telemetry-combined-ca-bundle\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.163188 5012 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-libvirt-combined-ca-bundle\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.163228 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ssh-key-openstack-edpm-ipam\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.163339 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.163391 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\" (UID: \"d869003b-7b03-4a8b-9f9c-73ca0ec4f359\") " Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.173039 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.173216 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.173235 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-kube-api-access-zf9mg" (OuterVolumeSpecName: "kube-api-access-zf9mg") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "kube-api-access-zf9mg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.180847 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.181220 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.181478 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.192581 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.192814 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.192658 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.192726 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.192924 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.194387 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.214040 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.219713 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-inventory" (OuterVolumeSpecName: "inventory") pod "d869003b-7b03-4a8b-9f9c-73ca0ec4f359" (UID: "d869003b-7b03-4a8b-9f9c-73ca0ec4f359"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265637 5012 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265675 5012 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265686 5012 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265698 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265707 5012 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: 
I0219 05:58:00.265718 5012 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265727 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265736 5012 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265748 5012 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265758 5012 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265768 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf9mg\" (UniqueName: \"kubernetes.io/projected/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-kube-api-access-zf9mg\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265779 5012 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265788 5012 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.265799 5012 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d869003b-7b03-4a8b-9f9c-73ca0ec4f359-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.525515 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" event={"ID":"d869003b-7b03-4a8b-9f9c-73ca0ec4f359","Type":"ContainerDied","Data":"c79daf974f33d700f2f0838eecbe85e8cd1c1c0b3e0d9db46ea76aecfbdd9d4f"} Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.525564 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c79daf974f33d700f2f0838eecbe85e8cd1c1c0b3e0d9db46ea76aecfbdd9d4f" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.525601 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.752771 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx"] Feb 19 05:58:00 crc kubenswrapper[5012]: E0219 05:58:00.753532 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d869003b-7b03-4a8b-9f9c-73ca0ec4f359" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.753564 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d869003b-7b03-4a8b-9f9c-73ca0ec4f359" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.753910 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d869003b-7b03-4a8b-9f9c-73ca0ec4f359" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.755170 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.757890 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.763340 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.763665 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.763877 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.764034 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.766622 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx"] Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.882169 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5x75\" (UniqueName: \"kubernetes.io/projected/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-kube-api-access-z5x75\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.882282 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: 
\"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.882391 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.882492 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.882610 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.984206 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.984389 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.984558 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.984714 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.985434 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.985480 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5x75\" (UniqueName: \"kubernetes.io/projected/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-kube-api-access-z5x75\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.991424 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.992868 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:00 crc kubenswrapper[5012]: I0219 05:58:00.999858 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:01 crc kubenswrapper[5012]: I0219 05:58:01.005512 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5x75\" (UniqueName: \"kubernetes.io/projected/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-kube-api-access-z5x75\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gxxmx\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:01 crc kubenswrapper[5012]: I0219 05:58:01.087966 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:58:01 crc kubenswrapper[5012]: I0219 05:58:01.712228 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx"] Feb 19 05:58:01 crc kubenswrapper[5012]: W0219 05:58:01.716945 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7335769e_5b13_4d1b_8aa7_e7f192ee9e2b.slice/crio-51044e02dbb31421e2a2a301042a93613a0c26e6fdd9521b17f1f9f10c4eb731 WatchSource:0}: Error finding container 51044e02dbb31421e2a2a301042a93613a0c26e6fdd9521b17f1f9f10c4eb731: Status 404 returned error can't find the container with id 51044e02dbb31421e2a2a301042a93613a0c26e6fdd9521b17f1f9f10c4eb731 Feb 19 05:58:02 crc kubenswrapper[5012]: I0219 05:58:02.550049 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" event={"ID":"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b","Type":"ContainerStarted","Data":"51044e02dbb31421e2a2a301042a93613a0c26e6fdd9521b17f1f9f10c4eb731"} Feb 19 05:58:03 crc kubenswrapper[5012]: I0219 05:58:03.564295 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" event={"ID":"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b","Type":"ContainerStarted","Data":"f923d7786be4a9ab567db6de15be49bd354ff86095fbe61b564432c2dfb881d3"} Feb 19 05:58:03 crc kubenswrapper[5012]: I0219 05:58:03.587365 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" podStartSLOduration=3.080080611 podStartE2EDuration="3.587338961s" podCreationTimestamp="2026-02-19 05:58:00 +0000 UTC" firstStartedPulling="2026-02-19 05:58:01.719644135 +0000 UTC m=+1977.752966704" lastFinishedPulling="2026-02-19 05:58:02.226902455 +0000 UTC m=+1978.260225054" observedRunningTime="2026-02-19 
05:58:03.582468211 +0000 UTC m=+1979.615790810" watchObservedRunningTime="2026-02-19 05:58:03.587338961 +0000 UTC m=+1979.620661570" Feb 19 05:58:14 crc kubenswrapper[5012]: I0219 05:58:14.430844 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 05:58:14 crc kubenswrapper[5012]: I0219 05:58:14.431368 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 05:58:14 crc kubenswrapper[5012]: I0219 05:58:14.431417 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 05:58:14 crc kubenswrapper[5012]: I0219 05:58:14.432222 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50740295b4ff1d8fcf9687906fffd0580ff7c4139e466c7a77580870ab679afe"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 05:58:14 crc kubenswrapper[5012]: I0219 05:58:14.432292 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://50740295b4ff1d8fcf9687906fffd0580ff7c4139e466c7a77580870ab679afe" gracePeriod=600 Feb 19 05:58:14 crc kubenswrapper[5012]: I0219 05:58:14.676092 5012 generic.go:334] 
"Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="50740295b4ff1d8fcf9687906fffd0580ff7c4139e466c7a77580870ab679afe" exitCode=0 Feb 19 05:58:14 crc kubenswrapper[5012]: I0219 05:58:14.676288 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"50740295b4ff1d8fcf9687906fffd0580ff7c4139e466c7a77580870ab679afe"} Feb 19 05:58:14 crc kubenswrapper[5012]: I0219 05:58:14.676385 5012 scope.go:117] "RemoveContainer" containerID="a17588b538515633f55a600b82148d0774327474a96b5467ae0aabedd5040d42" Feb 19 05:58:15 crc kubenswrapper[5012]: I0219 05:58:15.691941 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f"} Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.184086 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bmlpm"] Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.187547 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.210681 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bmlpm"] Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.251844 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-utilities\") pod \"redhat-operators-bmlpm\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.252481 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9x8j\" (UniqueName: \"kubernetes.io/projected/5db1fe46-364c-49e0-96a5-5f2deba8029b-kube-api-access-j9x8j\") pod \"redhat-operators-bmlpm\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.252546 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-catalog-content\") pod \"redhat-operators-bmlpm\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.356154 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-utilities\") pod \"redhat-operators-bmlpm\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.356251 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-j9x8j\" (UniqueName: \"kubernetes.io/projected/5db1fe46-364c-49e0-96a5-5f2deba8029b-kube-api-access-j9x8j\") pod \"redhat-operators-bmlpm\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.356330 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-catalog-content\") pod \"redhat-operators-bmlpm\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.356902 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-catalog-content\") pod \"redhat-operators-bmlpm\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.360903 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-utilities\") pod \"redhat-operators-bmlpm\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.379921 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9x8j\" (UniqueName: \"kubernetes.io/projected/5db1fe46-364c-49e0-96a5-5f2deba8029b-kube-api-access-j9x8j\") pod \"redhat-operators-bmlpm\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:44 crc kubenswrapper[5012]: I0219 05:58:44.510256 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:45 crc kubenswrapper[5012]: I0219 05:58:45.014871 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bmlpm"] Feb 19 05:58:45 crc kubenswrapper[5012]: I0219 05:58:45.103286 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmlpm" event={"ID":"5db1fe46-364c-49e0-96a5-5f2deba8029b","Type":"ContainerStarted","Data":"6abd3d316cec6f5cbf6e459a814eb5160689ca42d876e80b63aaf9e3233e6715"} Feb 19 05:58:46 crc kubenswrapper[5012]: I0219 05:58:46.111958 5012 generic.go:334] "Generic (PLEG): container finished" podID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerID="6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b" exitCode=0 Feb 19 05:58:46 crc kubenswrapper[5012]: I0219 05:58:46.112024 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmlpm" event={"ID":"5db1fe46-364c-49e0-96a5-5f2deba8029b","Type":"ContainerDied","Data":"6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b"} Feb 19 05:58:48 crc kubenswrapper[5012]: I0219 05:58:48.761979 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmlpm" event={"ID":"5db1fe46-364c-49e0-96a5-5f2deba8029b","Type":"ContainerStarted","Data":"b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3"} Feb 19 05:58:51 crc kubenswrapper[5012]: I0219 05:58:51.804353 5012 generic.go:334] "Generic (PLEG): container finished" podID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerID="b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3" exitCode=0 Feb 19 05:58:51 crc kubenswrapper[5012]: I0219 05:58:51.804476 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmlpm" 
event={"ID":"5db1fe46-364c-49e0-96a5-5f2deba8029b","Type":"ContainerDied","Data":"b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3"} Feb 19 05:58:52 crc kubenswrapper[5012]: I0219 05:58:52.823240 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmlpm" event={"ID":"5db1fe46-364c-49e0-96a5-5f2deba8029b","Type":"ContainerStarted","Data":"9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044"} Feb 19 05:58:52 crc kubenswrapper[5012]: I0219 05:58:52.852236 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bmlpm" podStartSLOduration=2.705265775 podStartE2EDuration="8.852210549s" podCreationTimestamp="2026-02-19 05:58:44 +0000 UTC" firstStartedPulling="2026-02-19 05:58:46.114903329 +0000 UTC m=+2022.148225898" lastFinishedPulling="2026-02-19 05:58:52.261848093 +0000 UTC m=+2028.295170672" observedRunningTime="2026-02-19 05:58:52.851479161 +0000 UTC m=+2028.884801760" watchObservedRunningTime="2026-02-19 05:58:52.852210549 +0000 UTC m=+2028.885533148" Feb 19 05:58:54 crc kubenswrapper[5012]: I0219 05:58:54.511531 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:54 crc kubenswrapper[5012]: I0219 05:58:54.511932 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:58:55 crc kubenswrapper[5012]: I0219 05:58:55.575351 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bmlpm" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerName="registry-server" probeResult="failure" output=< Feb 19 05:58:55 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 05:58:55 crc kubenswrapper[5012]: > Feb 19 05:59:04 crc kubenswrapper[5012]: I0219 05:59:04.559473 5012 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:59:04 crc kubenswrapper[5012]: I0219 05:59:04.603669 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:59:04 crc kubenswrapper[5012]: I0219 05:59:04.799503 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bmlpm"] Feb 19 05:59:05 crc kubenswrapper[5012]: I0219 05:59:05.972882 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bmlpm" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerName="registry-server" containerID="cri-o://9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044" gracePeriod=2 Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.610690 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.724101 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-catalog-content\") pod \"5db1fe46-364c-49e0-96a5-5f2deba8029b\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.724202 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9x8j\" (UniqueName: \"kubernetes.io/projected/5db1fe46-364c-49e0-96a5-5f2deba8029b-kube-api-access-j9x8j\") pod \"5db1fe46-364c-49e0-96a5-5f2deba8029b\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.724247 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-utilities\") pod 
\"5db1fe46-364c-49e0-96a5-5f2deba8029b\" (UID: \"5db1fe46-364c-49e0-96a5-5f2deba8029b\") " Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.725985 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-utilities" (OuterVolumeSpecName: "utilities") pod "5db1fe46-364c-49e0-96a5-5f2deba8029b" (UID: "5db1fe46-364c-49e0-96a5-5f2deba8029b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.733517 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db1fe46-364c-49e0-96a5-5f2deba8029b-kube-api-access-j9x8j" (OuterVolumeSpecName: "kube-api-access-j9x8j") pod "5db1fe46-364c-49e0-96a5-5f2deba8029b" (UID: "5db1fe46-364c-49e0-96a5-5f2deba8029b"). InnerVolumeSpecName "kube-api-access-j9x8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.827437 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9x8j\" (UniqueName: \"kubernetes.io/projected/5db1fe46-364c-49e0-96a5-5f2deba8029b-kube-api-access-j9x8j\") on node \"crc\" DevicePath \"\"" Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.827484 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.896706 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5db1fe46-364c-49e0-96a5-5f2deba8029b" (UID: "5db1fe46-364c-49e0-96a5-5f2deba8029b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.928877 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5db1fe46-364c-49e0-96a5-5f2deba8029b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.989661 5012 generic.go:334] "Generic (PLEG): container finished" podID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerID="9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044" exitCode=0 Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.989730 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmlpm" event={"ID":"5db1fe46-364c-49e0-96a5-5f2deba8029b","Type":"ContainerDied","Data":"9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044"} Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.989796 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bmlpm" Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.989819 5012 scope.go:117] "RemoveContainer" containerID="9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044" Feb 19 05:59:06 crc kubenswrapper[5012]: I0219 05:59:06.989798 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bmlpm" event={"ID":"5db1fe46-364c-49e0-96a5-5f2deba8029b","Type":"ContainerDied","Data":"6abd3d316cec6f5cbf6e459a814eb5160689ca42d876e80b63aaf9e3233e6715"} Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.043554 5012 scope.go:117] "RemoveContainer" containerID="b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3" Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.046245 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bmlpm"] Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.067347 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bmlpm"] Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.072007 5012 scope.go:117] "RemoveContainer" containerID="6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b" Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.139027 5012 scope.go:117] "RemoveContainer" containerID="9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044" Feb 19 05:59:07 crc kubenswrapper[5012]: E0219 05:59:07.139611 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044\": container with ID starting with 9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044 not found: ID does not exist" containerID="9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044" Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.139662 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044"} err="failed to get container status \"9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044\": rpc error: code = NotFound desc = could not find container \"9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044\": container with ID starting with 9c6e9606d2525b510dbb1ee0b4a6651dc86d29352cf27fdecff880abb6be3044 not found: ID does not exist" Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.139698 5012 scope.go:117] "RemoveContainer" containerID="b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3" Feb 19 05:59:07 crc kubenswrapper[5012]: E0219 05:59:07.140084 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3\": container with ID starting with b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3 not found: ID does not exist" containerID="b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3" Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.140130 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3"} err="failed to get container status \"b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3\": rpc error: code = NotFound desc = could not find container \"b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3\": container with ID starting with b3997dad9d164883556a461f9946e7d738f7536ff8a0c9d9fe1579570b464ae3 not found: ID does not exist" Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.140156 5012 scope.go:117] "RemoveContainer" containerID="6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b" Feb 19 05:59:07 crc kubenswrapper[5012]: E0219 
05:59:07.140522 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b\": container with ID starting with 6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b not found: ID does not exist" containerID="6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b" Feb 19 05:59:07 crc kubenswrapper[5012]: I0219 05:59:07.140610 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b"} err="failed to get container status \"6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b\": rpc error: code = NotFound desc = could not find container \"6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b\": container with ID starting with 6a1c5e5c03eb06538652adea7686116b76f912bf38923e444d49134aa4f0a89b not found: ID does not exist" Feb 19 05:59:08 crc kubenswrapper[5012]: I0219 05:59:08.722454 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" path="/var/lib/kubelet/pods/5db1fe46-364c-49e0-96a5-5f2deba8029b/volumes" Feb 19 05:59:15 crc kubenswrapper[5012]: I0219 05:59:15.093097 5012 generic.go:334] "Generic (PLEG): container finished" podID="7335769e-5b13-4d1b-8aa7-e7f192ee9e2b" containerID="f923d7786be4a9ab567db6de15be49bd354ff86095fbe61b564432c2dfb881d3" exitCode=0 Feb 19 05:59:15 crc kubenswrapper[5012]: I0219 05:59:15.093206 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" event={"ID":"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b","Type":"ContainerDied","Data":"f923d7786be4a9ab567db6de15be49bd354ff86095fbe61b564432c2dfb881d3"} Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.695847 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.761269 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ssh-key-openstack-edpm-ipam\") pod \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.761491 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-inventory\") pod \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.761586 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5x75\" (UniqueName: \"kubernetes.io/projected/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-kube-api-access-z5x75\") pod \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.761633 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovn-combined-ca-bundle\") pod \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.761677 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovncontroller-config-0\") pod \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\" (UID: \"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b\") " Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.771220 5012 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b" (UID: "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.771477 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-kube-api-access-z5x75" (OuterVolumeSpecName: "kube-api-access-z5x75") pod "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b" (UID: "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b"). InnerVolumeSpecName "kube-api-access-z5x75". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.804794 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b" (UID: "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.810297 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-inventory" (OuterVolumeSpecName: "inventory") pod "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b" (UID: "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.816166 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b" (UID: "7335769e-5b13-4d1b-8aa7-e7f192ee9e2b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.864771 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.864935 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.865026 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5x75\" (UniqueName: \"kubernetes.io/projected/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-kube-api-access-z5x75\") on node \"crc\" DevicePath \"\"" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.865111 5012 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 05:59:16 crc kubenswrapper[5012]: I0219 05:59:16.865195 5012 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7335769e-5b13-4d1b-8aa7-e7f192ee9e2b-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.114590 5012 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" event={"ID":"7335769e-5b13-4d1b-8aa7-e7f192ee9e2b","Type":"ContainerDied","Data":"51044e02dbb31421e2a2a301042a93613a0c26e6fdd9521b17f1f9f10c4eb731"} Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.114829 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51044e02dbb31421e2a2a301042a93613a0c26e6fdd9521b17f1f9f10c4eb731" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.114663 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gxxmx" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.228408 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2"] Feb 19 05:59:17 crc kubenswrapper[5012]: E0219 05:59:17.228823 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerName="registry-server" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.228840 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerName="registry-server" Feb 19 05:59:17 crc kubenswrapper[5012]: E0219 05:59:17.228857 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerName="extract-utilities" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.228863 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerName="extract-utilities" Feb 19 05:59:17 crc kubenswrapper[5012]: E0219 05:59:17.228882 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7335769e-5b13-4d1b-8aa7-e7f192ee9e2b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.228889 5012 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="7335769e-5b13-4d1b-8aa7-e7f192ee9e2b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 05:59:17 crc kubenswrapper[5012]: E0219 05:59:17.228900 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerName="extract-content" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.228908 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerName="extract-content" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.229117 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="7335769e-5b13-4d1b-8aa7-e7f192ee9e2b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.229129 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db1fe46-364c-49e0-96a5-5f2deba8029b" containerName="registry-server" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.229839 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.232110 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.232328 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.232586 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.232768 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.232949 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.234832 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.240523 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2"] Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.272413 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.272464 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xstqg\" (UniqueName: \"kubernetes.io/projected/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-kube-api-access-xstqg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.272494 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.272639 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.272768 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.272818 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.374828 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.374935 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xstqg\" (UniqueName: \"kubernetes.io/projected/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-kube-api-access-xstqg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.374989 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.375062 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.375182 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.375228 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.381432 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.382438 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-nova-metadata-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.383148 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.385386 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.390216 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.396105 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xstqg\" (UniqueName: \"kubernetes.io/projected/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-kube-api-access-xstqg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:17 crc kubenswrapper[5012]: I0219 05:59:17.568019 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 05:59:18 crc kubenswrapper[5012]: W0219 05:59:18.146442 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod534720dc_6ff8_4fdc_9337_6fe77ad1eaa8.slice/crio-d8e3d3b95bc18dd0efc3e5005a2b6653906ccf976588952e1eb773f69f99d7e8 WatchSource:0}: Error finding container d8e3d3b95bc18dd0efc3e5005a2b6653906ccf976588952e1eb773f69f99d7e8: Status 404 returned error can't find the container with id d8e3d3b95bc18dd0efc3e5005a2b6653906ccf976588952e1eb773f69f99d7e8 Feb 19 05:59:18 crc kubenswrapper[5012]: I0219 05:59:18.151541 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2"] Feb 19 05:59:19 crc kubenswrapper[5012]: I0219 05:59:19.138444 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" event={"ID":"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8","Type":"ContainerStarted","Data":"a8c9ad25ba00cba89d94c38e3c88674ed58044e807c2ddd9b18cd3c2ad5f8504"} Feb 19 05:59:19 crc kubenswrapper[5012]: I0219 05:59:19.138811 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" event={"ID":"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8","Type":"ContainerStarted","Data":"d8e3d3b95bc18dd0efc3e5005a2b6653906ccf976588952e1eb773f69f99d7e8"} Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.152412 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" podStartSLOduration=42.687807216 podStartE2EDuration="43.152393589s" 
podCreationTimestamp="2026-02-19 05:59:17 +0000 UTC" firstStartedPulling="2026-02-19 05:59:18.148645428 +0000 UTC m=+2054.181967997" lastFinishedPulling="2026-02-19 05:59:18.613231761 +0000 UTC m=+2054.646554370" observedRunningTime="2026-02-19 05:59:19.163792992 +0000 UTC m=+2055.197115611" watchObservedRunningTime="2026-02-19 06:00:00.152393589 +0000 UTC m=+2096.185716168" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.163032 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8"] Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.164963 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.168461 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.168946 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.181781 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8"] Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.276165 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-secret-volume\") pod \"collect-profiles-29524680-7g7p8\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.276271 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-72p4v\" (UniqueName: \"kubernetes.io/projected/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-kube-api-access-72p4v\") pod \"collect-profiles-29524680-7g7p8\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.276882 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-config-volume\") pod \"collect-profiles-29524680-7g7p8\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.378969 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-config-volume\") pod \"collect-profiles-29524680-7g7p8\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.379041 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-secret-volume\") pod \"collect-profiles-29524680-7g7p8\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.379100 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72p4v\" (UniqueName: \"kubernetes.io/projected/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-kube-api-access-72p4v\") pod \"collect-profiles-29524680-7g7p8\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.380720 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-config-volume\") pod \"collect-profiles-29524680-7g7p8\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.389453 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-secret-volume\") pod \"collect-profiles-29524680-7g7p8\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.404251 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72p4v\" (UniqueName: \"kubernetes.io/projected/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-kube-api-access-72p4v\") pod \"collect-profiles-29524680-7g7p8\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:00 crc kubenswrapper[5012]: I0219 06:00:00.490910 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:01 crc kubenswrapper[5012]: I0219 06:00:01.029609 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8"] Feb 19 06:00:01 crc kubenswrapper[5012]: W0219 06:00:01.034635 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5bdc022_3d70_4d4d_8f03_f2cf8b295a7e.slice/crio-05827d7776e4bd92d74a5b97b10bfc4be5b2f23c72eb78074e033a996dd586af WatchSource:0}: Error finding container 05827d7776e4bd92d74a5b97b10bfc4be5b2f23c72eb78074e033a996dd586af: Status 404 returned error can't find the container with id 05827d7776e4bd92d74a5b97b10bfc4be5b2f23c72eb78074e033a996dd586af Feb 19 06:00:01 crc kubenswrapper[5012]: I0219 06:00:01.616739 5012 generic.go:334] "Generic (PLEG): container finished" podID="f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e" containerID="aab8c26b7c272ad359e2397dc4c5c133e04f23474846a5322b643a2a4fdad8bd" exitCode=0 Feb 19 06:00:01 crc kubenswrapper[5012]: I0219 06:00:01.616852 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" event={"ID":"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e","Type":"ContainerDied","Data":"aab8c26b7c272ad359e2397dc4c5c133e04f23474846a5322b643a2a4fdad8bd"} Feb 19 06:00:01 crc kubenswrapper[5012]: I0219 06:00:01.617493 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" event={"ID":"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e","Type":"ContainerStarted","Data":"05827d7776e4bd92d74a5b97b10bfc4be5b2f23c72eb78074e033a996dd586af"} Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.077212 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.135710 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-config-volume\") pod \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.135867 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72p4v\" (UniqueName: \"kubernetes.io/projected/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-kube-api-access-72p4v\") pod \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.136080 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-secret-volume\") pod \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\" (UID: \"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e\") " Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.136792 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-config-volume" (OuterVolumeSpecName: "config-volume") pod "f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e" (UID: "f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.137291 5012 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.141575 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-kube-api-access-72p4v" (OuterVolumeSpecName: "kube-api-access-72p4v") pod "f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e" (UID: "f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e"). InnerVolumeSpecName "kube-api-access-72p4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.144437 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e" (UID: "f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.240477 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72p4v\" (UniqueName: \"kubernetes.io/projected/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-kube-api-access-72p4v\") on node \"crc\" DevicePath \"\"" Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.240542 5012 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.644026 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" event={"ID":"f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e","Type":"ContainerDied","Data":"05827d7776e4bd92d74a5b97b10bfc4be5b2f23c72eb78074e033a996dd586af"} Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.644065 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05827d7776e4bd92d74a5b97b10bfc4be5b2f23c72eb78074e033a996dd586af" Feb 19 06:00:03 crc kubenswrapper[5012]: I0219 06:00:03.644108 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8" Feb 19 06:00:04 crc kubenswrapper[5012]: I0219 06:00:04.180203 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6"] Feb 19 06:00:04 crc kubenswrapper[5012]: I0219 06:00:04.194420 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524635-psnb6"] Feb 19 06:00:04 crc kubenswrapper[5012]: I0219 06:00:04.724609 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46582f7f-c6b0-4ae3-9103-4a4754304438" path="/var/lib/kubelet/pods/46582f7f-c6b0-4ae3-9103-4a4754304438/volumes" Feb 19 06:00:11 crc kubenswrapper[5012]: E0219 06:00:11.684184 5012 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod534720dc_6ff8_4fdc_9337_6fe77ad1eaa8.slice/crio-a8c9ad25ba00cba89d94c38e3c88674ed58044e807c2ddd9b18cd3c2ad5f8504.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod534720dc_6ff8_4fdc_9337_6fe77ad1eaa8.slice/crio-conmon-a8c9ad25ba00cba89d94c38e3c88674ed58044e807c2ddd9b18cd3c2ad5f8504.scope\": RecentStats: unable to find data in memory cache]" Feb 19 06:00:11 crc kubenswrapper[5012]: I0219 06:00:11.742972 5012 generic.go:334] "Generic (PLEG): container finished" podID="534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" containerID="a8c9ad25ba00cba89d94c38e3c88674ed58044e807c2ddd9b18cd3c2ad5f8504" exitCode=0 Feb 19 06:00:11 crc kubenswrapper[5012]: I0219 06:00:11.743028 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" event={"ID":"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8","Type":"ContainerDied","Data":"a8c9ad25ba00cba89d94c38e3c88674ed58044e807c2ddd9b18cd3c2ad5f8504"} 
Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.357787 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.393075 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-nova-metadata-neutron-config-0\") pod \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.393240 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-ssh-key-openstack-edpm-ipam\") pod \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.393336 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-metadata-combined-ca-bundle\") pod \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.393421 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xstqg\" (UniqueName: \"kubernetes.io/projected/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-kube-api-access-xstqg\") pod \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.393518 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.394775 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-inventory\") pod \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\" (UID: \"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8\") " Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.429539 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" (UID: "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.471538 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-kube-api-access-xstqg" (OuterVolumeSpecName: "kube-api-access-xstqg") pod "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" (UID: "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8"). InnerVolumeSpecName "kube-api-access-xstqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.483481 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-inventory" (OuterVolumeSpecName: "inventory") pod "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" (UID: "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.497638 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xstqg\" (UniqueName: \"kubernetes.io/projected/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-kube-api-access-xstqg\") on node \"crc\" DevicePath \"\"" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.497666 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.497676 5012 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.514517 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" (UID: "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.522580 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" (UID: "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.525529 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" (UID: "534720dc-6ff8-4fdc-9337-6fe77ad1eaa8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.599396 5012 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.599428 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.599440 5012 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/534720dc-6ff8-4fdc-9337-6fe77ad1eaa8-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.782686 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" event={"ID":"534720dc-6ff8-4fdc-9337-6fe77ad1eaa8","Type":"ContainerDied","Data":"d8e3d3b95bc18dd0efc3e5005a2b6653906ccf976588952e1eb773f69f99d7e8"} Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.782726 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8e3d3b95bc18dd0efc3e5005a2b6653906ccf976588952e1eb773f69f99d7e8" 
Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.782777 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.882776 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s"] Feb 19 06:00:13 crc kubenswrapper[5012]: E0219 06:00:13.883146 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.883168 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 06:00:13 crc kubenswrapper[5012]: E0219 06:00:13.883184 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e" containerName="collect-profiles" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.883193 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e" containerName="collect-profiles" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.883502 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e" containerName="collect-profiles" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.883522 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="534720dc-6ff8-4fdc-9337-6fe77ad1eaa8" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.884250 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.886455 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.886801 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.886985 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.891155 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.891229 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 06:00:13 crc kubenswrapper[5012]: I0219 06:00:13.918837 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s"] Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.012603 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g9n7\" (UniqueName: \"kubernetes.io/projected/fcace677-35b0-499f-998c-99168fbfa0af-kube-api-access-6g9n7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.012910 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: 
\"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.012989 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.013234 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.013325 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.115625 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.115668 5012 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.115748 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.115768 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.115807 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g9n7\" (UniqueName: \"kubernetes.io/projected/fcace677-35b0-499f-998c-99168fbfa0af-kube-api-access-6g9n7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.119917 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.120189 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.120209 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.124527 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.145620 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g9n7\" (UniqueName: \"kubernetes.io/projected/fcace677-35b0-499f-998c-99168fbfa0af-kube-api-access-6g9n7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-2n79s\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.211358 5012 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.430792 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.431098 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:00:14 crc kubenswrapper[5012]: I0219 06:00:14.800055 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s"] Feb 19 06:00:15 crc kubenswrapper[5012]: I0219 06:00:15.810463 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" event={"ID":"fcace677-35b0-499f-998c-99168fbfa0af","Type":"ContainerStarted","Data":"ead8d5cbfbadc07cdc6949287d7eaad0d3adb71e861dbd504d482651d9e45f96"} Feb 19 06:00:15 crc kubenswrapper[5012]: I0219 06:00:15.810887 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" event={"ID":"fcace677-35b0-499f-998c-99168fbfa0af","Type":"ContainerStarted","Data":"845fa55489eb1ebebf023adf297c3cff09eae6d31e26dd2248e57ae7baeee857"} Feb 19 06:00:15 crc kubenswrapper[5012]: I0219 06:00:15.840042 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" podStartSLOduration=2.410188323 podStartE2EDuration="2.840028076s" 
podCreationTimestamp="2026-02-19 06:00:13 +0000 UTC" firstStartedPulling="2026-02-19 06:00:14.805726272 +0000 UTC m=+2110.839048881" lastFinishedPulling="2026-02-19 06:00:15.235566055 +0000 UTC m=+2111.268888634" observedRunningTime="2026-02-19 06:00:15.836590352 +0000 UTC m=+2111.869912921" watchObservedRunningTime="2026-02-19 06:00:15.840028076 +0000 UTC m=+2111.873350635" Feb 19 06:00:36 crc kubenswrapper[5012]: I0219 06:00:36.820985 5012 scope.go:117] "RemoveContainer" containerID="6ecd18e5cbbb471f815af478d67f7066d4c1bb34788cd0f8db72ff1fe8b502b7" Feb 19 06:00:44 crc kubenswrapper[5012]: I0219 06:00:44.430939 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:00:44 crc kubenswrapper[5012]: I0219 06:00:44.431518 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.164612 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29524681-x9bcr"] Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.167681 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.176967 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524681-x9bcr"] Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.291395 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bvq5\" (UniqueName: \"kubernetes.io/projected/86c7e36d-88e3-432a-ad6f-74de626c5f30-kube-api-access-9bvq5\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.291530 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-config-data\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.291574 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-combined-ca-bundle\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.291802 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-fernet-keys\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.393733 5012 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-combined-ca-bundle\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.393859 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-fernet-keys\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.393986 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bvq5\" (UniqueName: \"kubernetes.io/projected/86c7e36d-88e3-432a-ad6f-74de626c5f30-kube-api-access-9bvq5\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.394065 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-config-data\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.402460 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-config-data\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.406992 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-fernet-keys\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.416546 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-combined-ca-bundle\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.417004 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bvq5\" (UniqueName: \"kubernetes.io/projected/86c7e36d-88e3-432a-ad6f-74de626c5f30-kube-api-access-9bvq5\") pod \"keystone-cron-29524681-x9bcr\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:00 crc kubenswrapper[5012]: I0219 06:01:00.501750 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:01 crc kubenswrapper[5012]: I0219 06:01:01.024227 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524681-x9bcr"] Feb 19 06:01:01 crc kubenswrapper[5012]: I0219 06:01:01.322249 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524681-x9bcr" event={"ID":"86c7e36d-88e3-432a-ad6f-74de626c5f30","Type":"ContainerStarted","Data":"f73e2bce52b5a24470ca1d0bb1435f7e6c5323b6116485a81bc0973e84e9a11b"} Feb 19 06:01:01 crc kubenswrapper[5012]: I0219 06:01:01.322703 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524681-x9bcr" event={"ID":"86c7e36d-88e3-432a-ad6f-74de626c5f30","Type":"ContainerStarted","Data":"dd43f2b3a6e18b48ba0032f1d4b181d9e199dbb997ac344570a32216b4d8f020"} Feb 19 06:01:01 crc kubenswrapper[5012]: I0219 06:01:01.355368 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29524681-x9bcr" podStartSLOduration=1.3553444049999999 podStartE2EDuration="1.355344405s" podCreationTimestamp="2026-02-19 06:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 06:01:01.345887123 +0000 UTC m=+2157.379209702" watchObservedRunningTime="2026-02-19 06:01:01.355344405 +0000 UTC m=+2157.388667014" Feb 19 06:01:04 crc kubenswrapper[5012]: I0219 06:01:04.356418 5012 generic.go:334] "Generic (PLEG): container finished" podID="86c7e36d-88e3-432a-ad6f-74de626c5f30" containerID="f73e2bce52b5a24470ca1d0bb1435f7e6c5323b6116485a81bc0973e84e9a11b" exitCode=0 Feb 19 06:01:04 crc kubenswrapper[5012]: I0219 06:01:04.356505 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524681-x9bcr" 
event={"ID":"86c7e36d-88e3-432a-ad6f-74de626c5f30","Type":"ContainerDied","Data":"f73e2bce52b5a24470ca1d0bb1435f7e6c5323b6116485a81bc0973e84e9a11b"} Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.771012 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.859868 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-fernet-keys\") pod \"86c7e36d-88e3-432a-ad6f-74de626c5f30\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.860040 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-combined-ca-bundle\") pod \"86c7e36d-88e3-432a-ad6f-74de626c5f30\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.860181 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bvq5\" (UniqueName: \"kubernetes.io/projected/86c7e36d-88e3-432a-ad6f-74de626c5f30-kube-api-access-9bvq5\") pod \"86c7e36d-88e3-432a-ad6f-74de626c5f30\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.860227 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-config-data\") pod \"86c7e36d-88e3-432a-ad6f-74de626c5f30\" (UID: \"86c7e36d-88e3-432a-ad6f-74de626c5f30\") " Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.866893 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86c7e36d-88e3-432a-ad6f-74de626c5f30-kube-api-access-9bvq5" 
(OuterVolumeSpecName: "kube-api-access-9bvq5") pod "86c7e36d-88e3-432a-ad6f-74de626c5f30" (UID: "86c7e36d-88e3-432a-ad6f-74de626c5f30"). InnerVolumeSpecName "kube-api-access-9bvq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.867701 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "86c7e36d-88e3-432a-ad6f-74de626c5f30" (UID: "86c7e36d-88e3-432a-ad6f-74de626c5f30"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.903194 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86c7e36d-88e3-432a-ad6f-74de626c5f30" (UID: "86c7e36d-88e3-432a-ad6f-74de626c5f30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.931344 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-config-data" (OuterVolumeSpecName: "config-data") pod "86c7e36d-88e3-432a-ad6f-74de626c5f30" (UID: "86c7e36d-88e3-432a-ad6f-74de626c5f30"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.963711 5012 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.964084 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.964133 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bvq5\" (UniqueName: \"kubernetes.io/projected/86c7e36d-88e3-432a-ad6f-74de626c5f30-kube-api-access-9bvq5\") on node \"crc\" DevicePath \"\"" Feb 19 06:01:05 crc kubenswrapper[5012]: I0219 06:01:05.964150 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c7e36d-88e3-432a-ad6f-74de626c5f30-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 06:01:06 crc kubenswrapper[5012]: I0219 06:01:06.388184 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524681-x9bcr" event={"ID":"86c7e36d-88e3-432a-ad6f-74de626c5f30","Type":"ContainerDied","Data":"dd43f2b3a6e18b48ba0032f1d4b181d9e199dbb997ac344570a32216b4d8f020"} Feb 19 06:01:06 crc kubenswrapper[5012]: I0219 06:01:06.388525 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd43f2b3a6e18b48ba0032f1d4b181d9e199dbb997ac344570a32216b4d8f020" Feb 19 06:01:06 crc kubenswrapper[5012]: I0219 06:01:06.388417 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524681-x9bcr" Feb 19 06:01:14 crc kubenswrapper[5012]: I0219 06:01:14.431105 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:01:14 crc kubenswrapper[5012]: I0219 06:01:14.431924 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:01:14 crc kubenswrapper[5012]: I0219 06:01:14.432009 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:01:14 crc kubenswrapper[5012]: I0219 06:01:14.433233 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:01:14 crc kubenswrapper[5012]: I0219 06:01:14.433398 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" gracePeriod=600 Feb 19 06:01:14 crc kubenswrapper[5012]: E0219 06:01:14.560752 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:01:15 crc kubenswrapper[5012]: I0219 06:01:15.509959 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" exitCode=0 Feb 19 06:01:15 crc kubenswrapper[5012]: I0219 06:01:15.510027 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f"} Feb 19 06:01:15 crc kubenswrapper[5012]: I0219 06:01:15.510438 5012 scope.go:117] "RemoveContainer" containerID="50740295b4ff1d8fcf9687906fffd0580ff7c4139e466c7a77580870ab679afe" Feb 19 06:01:15 crc kubenswrapper[5012]: I0219 06:01:15.511134 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:01:15 crc kubenswrapper[5012]: E0219 06:01:15.511733 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:01:26 crc kubenswrapper[5012]: I0219 06:01:26.704125 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:01:26 crc 
kubenswrapper[5012]: E0219 06:01:26.705264 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:01:36 crc kubenswrapper[5012]: I0219 06:01:36.906937 5012 scope.go:117] "RemoveContainer" containerID="88705a5b47e877865905bfec0d79a37661c7afd39bd29b3b62dcb301a3a591e6" Feb 19 06:01:36 crc kubenswrapper[5012]: I0219 06:01:36.939193 5012 scope.go:117] "RemoveContainer" containerID="f2c73daa7912b8b42ff12ed6bf21505d1239d1f38a626a18bd4a378076264990" Feb 19 06:01:37 crc kubenswrapper[5012]: I0219 06:01:37.030251 5012 scope.go:117] "RemoveContainer" containerID="7a7a28b9019ae634e7419610ad5d6e6779acece549fd96ddb1633a5dbbf4b985" Feb 19 06:01:38 crc kubenswrapper[5012]: I0219 06:01:38.704002 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:01:38 crc kubenswrapper[5012]: E0219 06:01:38.704938 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:01:39 crc kubenswrapper[5012]: I0219 06:01:39.761724 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9nqr4"] Feb 19 06:01:39 crc kubenswrapper[5012]: E0219 06:01:39.762527 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="86c7e36d-88e3-432a-ad6f-74de626c5f30" containerName="keystone-cron" Feb 19 06:01:39 crc kubenswrapper[5012]: I0219 06:01:39.762558 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c7e36d-88e3-432a-ad6f-74de626c5f30" containerName="keystone-cron" Feb 19 06:01:39 crc kubenswrapper[5012]: I0219 06:01:39.763124 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="86c7e36d-88e3-432a-ad6f-74de626c5f30" containerName="keystone-cron" Feb 19 06:01:39 crc kubenswrapper[5012]: I0219 06:01:39.766976 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:39 crc kubenswrapper[5012]: I0219 06:01:39.789698 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nqr4"] Feb 19 06:01:39 crc kubenswrapper[5012]: I0219 06:01:39.957216 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bt4l\" (UniqueName: \"kubernetes.io/projected/d888e883-8262-4386-b91a-14e87cd7fed3-kube-api-access-9bt4l\") pod \"certified-operators-9nqr4\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:39 crc kubenswrapper[5012]: I0219 06:01:39.957265 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-catalog-content\") pod \"certified-operators-9nqr4\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:39 crc kubenswrapper[5012]: I0219 06:01:39.957447 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-utilities\") pod \"certified-operators-9nqr4\" (UID: 
\"d888e883-8262-4386-b91a-14e87cd7fed3\") " pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.058855 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-utilities\") pod \"certified-operators-9nqr4\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.059385 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-utilities\") pod \"certified-operators-9nqr4\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.059647 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bt4l\" (UniqueName: \"kubernetes.io/projected/d888e883-8262-4386-b91a-14e87cd7fed3-kube-api-access-9bt4l\") pod \"certified-operators-9nqr4\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.059733 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-catalog-content\") pod \"certified-operators-9nqr4\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.060241 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-catalog-content\") pod \"certified-operators-9nqr4\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") 
" pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.081024 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bt4l\" (UniqueName: \"kubernetes.io/projected/d888e883-8262-4386-b91a-14e87cd7fed3-kube-api-access-9bt4l\") pod \"certified-operators-9nqr4\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.102146 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.613885 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nqr4"] Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.865017 5012 generic.go:334] "Generic (PLEG): container finished" podID="d888e883-8262-4386-b91a-14e87cd7fed3" containerID="90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6" exitCode=0 Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.865085 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nqr4" event={"ID":"d888e883-8262-4386-b91a-14e87cd7fed3","Type":"ContainerDied","Data":"90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6"} Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.865405 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nqr4" event={"ID":"d888e883-8262-4386-b91a-14e87cd7fed3","Type":"ContainerStarted","Data":"d4f572a48e3284c87552e4d72878660194d42b6a59b07ece38f080eec80bec4c"} Feb 19 06:01:40 crc kubenswrapper[5012]: I0219 06:01:40.867404 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 06:01:41 crc kubenswrapper[5012]: I0219 06:01:41.883490 5012 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-9nqr4" event={"ID":"d888e883-8262-4386-b91a-14e87cd7fed3","Type":"ContainerStarted","Data":"9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7"} Feb 19 06:01:42 crc kubenswrapper[5012]: I0219 06:01:42.900199 5012 generic.go:334] "Generic (PLEG): container finished" podID="d888e883-8262-4386-b91a-14e87cd7fed3" containerID="9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7" exitCode=0 Feb 19 06:01:42 crc kubenswrapper[5012]: I0219 06:01:42.900274 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nqr4" event={"ID":"d888e883-8262-4386-b91a-14e87cd7fed3","Type":"ContainerDied","Data":"9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7"} Feb 19 06:01:43 crc kubenswrapper[5012]: I0219 06:01:43.912370 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nqr4" event={"ID":"d888e883-8262-4386-b91a-14e87cd7fed3","Type":"ContainerStarted","Data":"4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1"} Feb 19 06:01:43 crc kubenswrapper[5012]: I0219 06:01:43.956731 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9nqr4" podStartSLOduration=2.529690358 podStartE2EDuration="4.956706069s" podCreationTimestamp="2026-02-19 06:01:39 +0000 UTC" firstStartedPulling="2026-02-19 06:01:40.867091301 +0000 UTC m=+2196.900413880" lastFinishedPulling="2026-02-19 06:01:43.294106992 +0000 UTC m=+2199.327429591" observedRunningTime="2026-02-19 06:01:43.950593299 +0000 UTC m=+2199.983915878" watchObservedRunningTime="2026-02-19 06:01:43.956706069 +0000 UTC m=+2199.990028648" Feb 19 06:01:50 crc kubenswrapper[5012]: I0219 06:01:50.103380 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:50 crc kubenswrapper[5012]: I0219 
06:01:50.103976 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:50 crc kubenswrapper[5012]: I0219 06:01:50.179028 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:51 crc kubenswrapper[5012]: I0219 06:01:51.056651 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:51 crc kubenswrapper[5012]: I0219 06:01:51.108025 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9nqr4"] Feb 19 06:01:52 crc kubenswrapper[5012]: I0219 06:01:52.703167 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:01:52 crc kubenswrapper[5012]: E0219 06:01:52.704011 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.031436 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9nqr4" podUID="d888e883-8262-4386-b91a-14e87cd7fed3" containerName="registry-server" containerID="cri-o://4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1" gracePeriod=2 Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.619158 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.710184 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bt4l\" (UniqueName: \"kubernetes.io/projected/d888e883-8262-4386-b91a-14e87cd7fed3-kube-api-access-9bt4l\") pod \"d888e883-8262-4386-b91a-14e87cd7fed3\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.710292 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-utilities\") pod \"d888e883-8262-4386-b91a-14e87cd7fed3\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.710388 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-catalog-content\") pod \"d888e883-8262-4386-b91a-14e87cd7fed3\" (UID: \"d888e883-8262-4386-b91a-14e87cd7fed3\") " Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.711890 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-utilities" (OuterVolumeSpecName: "utilities") pod "d888e883-8262-4386-b91a-14e87cd7fed3" (UID: "d888e883-8262-4386-b91a-14e87cd7fed3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.712620 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.717384 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d888e883-8262-4386-b91a-14e87cd7fed3-kube-api-access-9bt4l" (OuterVolumeSpecName: "kube-api-access-9bt4l") pod "d888e883-8262-4386-b91a-14e87cd7fed3" (UID: "d888e883-8262-4386-b91a-14e87cd7fed3"). InnerVolumeSpecName "kube-api-access-9bt4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.780833 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d888e883-8262-4386-b91a-14e87cd7fed3" (UID: "d888e883-8262-4386-b91a-14e87cd7fed3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.815717 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bt4l\" (UniqueName: \"kubernetes.io/projected/d888e883-8262-4386-b91a-14e87cd7fed3-kube-api-access-9bt4l\") on node \"crc\" DevicePath \"\"" Feb 19 06:01:53 crc kubenswrapper[5012]: I0219 06:01:53.815773 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d888e883-8262-4386-b91a-14e87cd7fed3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.046052 5012 generic.go:334] "Generic (PLEG): container finished" podID="d888e883-8262-4386-b91a-14e87cd7fed3" containerID="4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1" exitCode=0 Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.046160 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nqr4" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.046182 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nqr4" event={"ID":"d888e883-8262-4386-b91a-14e87cd7fed3","Type":"ContainerDied","Data":"4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1"} Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.046236 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nqr4" event={"ID":"d888e883-8262-4386-b91a-14e87cd7fed3","Type":"ContainerDied","Data":"d4f572a48e3284c87552e4d72878660194d42b6a59b07ece38f080eec80bec4c"} Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.046344 5012 scope.go:117] "RemoveContainer" containerID="4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.089774 5012 scope.go:117] "RemoveContainer" 
containerID="9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.091241 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9nqr4"] Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.103370 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9nqr4"] Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.124168 5012 scope.go:117] "RemoveContainer" containerID="90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.194858 5012 scope.go:117] "RemoveContainer" containerID="4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1" Feb 19 06:01:54 crc kubenswrapper[5012]: E0219 06:01:54.195372 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1\": container with ID starting with 4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1 not found: ID does not exist" containerID="4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.195403 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1"} err="failed to get container status \"4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1\": rpc error: code = NotFound desc = could not find container \"4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1\": container with ID starting with 4078567acc5b0b5f9bd773bb65d67aceceac6bc2e2f4bfbd05ba4622ee9580b1 not found: ID does not exist" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.195423 5012 scope.go:117] "RemoveContainer" 
containerID="9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7" Feb 19 06:01:54 crc kubenswrapper[5012]: E0219 06:01:54.195807 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7\": container with ID starting with 9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7 not found: ID does not exist" containerID="9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.195850 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7"} err="failed to get container status \"9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7\": rpc error: code = NotFound desc = could not find container \"9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7\": container with ID starting with 9f8cbaab59ee180b553e69cf1d222d00e711b2c50d0117270ab7d22b3674b8c7 not found: ID does not exist" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.195881 5012 scope.go:117] "RemoveContainer" containerID="90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6" Feb 19 06:01:54 crc kubenswrapper[5012]: E0219 06:01:54.196275 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6\": container with ID starting with 90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6 not found: ID does not exist" containerID="90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.196296 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6"} err="failed to get container status \"90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6\": rpc error: code = NotFound desc = could not find container \"90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6\": container with ID starting with 90298ee7aaa42290c8f9606f62ee198291d0a770b689d1f6e3934a09095119a6 not found: ID does not exist" Feb 19 06:01:54 crc kubenswrapper[5012]: I0219 06:01:54.721345 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d888e883-8262-4386-b91a-14e87cd7fed3" path="/var/lib/kubelet/pods/d888e883-8262-4386-b91a-14e87cd7fed3/volumes" Feb 19 06:02:05 crc kubenswrapper[5012]: I0219 06:02:05.702837 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:02:05 crc kubenswrapper[5012]: E0219 06:02:05.703989 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:02:16 crc kubenswrapper[5012]: I0219 06:02:16.705364 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:02:16 crc kubenswrapper[5012]: E0219 06:02:16.706464 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:02:27 crc kubenswrapper[5012]: I0219 06:02:27.703471 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:02:27 crc kubenswrapper[5012]: E0219 06:02:27.704479 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:02:42 crc kubenswrapper[5012]: I0219 06:02:42.703411 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:02:42 crc kubenswrapper[5012]: E0219 06:02:42.704501 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.191871 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qd2s5"] Feb 19 06:02:56 crc kubenswrapper[5012]: E0219 06:02:56.192998 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d888e883-8262-4386-b91a-14e87cd7fed3" containerName="extract-utilities" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.193047 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d888e883-8262-4386-b91a-14e87cd7fed3" 
containerName="extract-utilities" Feb 19 06:02:56 crc kubenswrapper[5012]: E0219 06:02:56.193076 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d888e883-8262-4386-b91a-14e87cd7fed3" containerName="extract-content" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.193099 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d888e883-8262-4386-b91a-14e87cd7fed3" containerName="extract-content" Feb 19 06:02:56 crc kubenswrapper[5012]: E0219 06:02:56.193131 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d888e883-8262-4386-b91a-14e87cd7fed3" containerName="registry-server" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.193138 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d888e883-8262-4386-b91a-14e87cd7fed3" containerName="registry-server" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.193386 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d888e883-8262-4386-b91a-14e87cd7fed3" containerName="registry-server" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.195106 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.229354 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qd2s5"] Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.364569 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-catalog-content\") pod \"redhat-marketplace-qd2s5\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.364635 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hldnt\" (UniqueName: \"kubernetes.io/projected/f3f94370-8ffb-4a67-9042-898ee37ed2a8-kube-api-access-hldnt\") pod \"redhat-marketplace-qd2s5\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.364710 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-utilities\") pod \"redhat-marketplace-qd2s5\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.466153 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-catalog-content\") pod \"redhat-marketplace-qd2s5\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.466227 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hldnt\" (UniqueName: \"kubernetes.io/projected/f3f94370-8ffb-4a67-9042-898ee37ed2a8-kube-api-access-hldnt\") pod \"redhat-marketplace-qd2s5\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.466317 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-utilities\") pod \"redhat-marketplace-qd2s5\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.466681 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-catalog-content\") pod \"redhat-marketplace-qd2s5\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.466737 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-utilities\") pod \"redhat-marketplace-qd2s5\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.487402 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hldnt\" (UniqueName: \"kubernetes.io/projected/f3f94370-8ffb-4a67-9042-898ee37ed2a8-kube-api-access-hldnt\") pod \"redhat-marketplace-qd2s5\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.521614 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:02:56 crc kubenswrapper[5012]: I0219 06:02:56.707084 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:02:56 crc kubenswrapper[5012]: E0219 06:02:56.709500 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:02:57 crc kubenswrapper[5012]: I0219 06:02:57.029020 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qd2s5"] Feb 19 06:02:57 crc kubenswrapper[5012]: I0219 06:02:57.835391 5012 generic.go:334] "Generic (PLEG): container finished" podID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerID="0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb" exitCode=0 Feb 19 06:02:57 crc kubenswrapper[5012]: I0219 06:02:57.835484 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qd2s5" event={"ID":"f3f94370-8ffb-4a67-9042-898ee37ed2a8","Type":"ContainerDied","Data":"0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb"} Feb 19 06:02:57 crc kubenswrapper[5012]: I0219 06:02:57.835863 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qd2s5" event={"ID":"f3f94370-8ffb-4a67-9042-898ee37ed2a8","Type":"ContainerStarted","Data":"c230c80d248b01025994ea307fc0f0128580771c2a08a4d5f702819870fdea83"} Feb 19 06:02:58 crc kubenswrapper[5012]: I0219 06:02:58.850662 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qd2s5" 
event={"ID":"f3f94370-8ffb-4a67-9042-898ee37ed2a8","Type":"ContainerStarted","Data":"7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f"} Feb 19 06:02:59 crc kubenswrapper[5012]: I0219 06:02:59.869498 5012 generic.go:334] "Generic (PLEG): container finished" podID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerID="7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f" exitCode=0 Feb 19 06:02:59 crc kubenswrapper[5012]: I0219 06:02:59.869555 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qd2s5" event={"ID":"f3f94370-8ffb-4a67-9042-898ee37ed2a8","Type":"ContainerDied","Data":"7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f"} Feb 19 06:03:00 crc kubenswrapper[5012]: I0219 06:03:00.882169 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qd2s5" event={"ID":"f3f94370-8ffb-4a67-9042-898ee37ed2a8","Type":"ContainerStarted","Data":"553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff"} Feb 19 06:03:00 crc kubenswrapper[5012]: I0219 06:03:00.903174 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qd2s5" podStartSLOduration=2.469018074 podStartE2EDuration="4.903143987s" podCreationTimestamp="2026-02-19 06:02:56 +0000 UTC" firstStartedPulling="2026-02-19 06:02:57.838323002 +0000 UTC m=+2273.871645571" lastFinishedPulling="2026-02-19 06:03:00.272448905 +0000 UTC m=+2276.305771484" observedRunningTime="2026-02-19 06:03:00.902164403 +0000 UTC m=+2276.935487002" watchObservedRunningTime="2026-02-19 06:03:00.903143987 +0000 UTC m=+2276.936466596" Feb 19 06:03:06 crc kubenswrapper[5012]: I0219 06:03:06.522058 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:03:06 crc kubenswrapper[5012]: I0219 06:03:06.522893 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:03:06 crc kubenswrapper[5012]: I0219 06:03:06.577285 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:03:07 crc kubenswrapper[5012]: I0219 06:03:07.040952 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:03:07 crc kubenswrapper[5012]: I0219 06:03:07.097631 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qd2s5"] Feb 19 06:03:08 crc kubenswrapper[5012]: I0219 06:03:08.704202 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:03:08 crc kubenswrapper[5012]: E0219 06:03:08.704787 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:03:08 crc kubenswrapper[5012]: I0219 06:03:08.979761 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qd2s5" podUID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerName="registry-server" containerID="cri-o://553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff" gracePeriod=2 Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.531290 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.722471 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-utilities\") pod \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.722555 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hldnt\" (UniqueName: \"kubernetes.io/projected/f3f94370-8ffb-4a67-9042-898ee37ed2a8-kube-api-access-hldnt\") pod \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.722673 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-catalog-content\") pod \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\" (UID: \"f3f94370-8ffb-4a67-9042-898ee37ed2a8\") " Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.724633 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-utilities" (OuterVolumeSpecName: "utilities") pod "f3f94370-8ffb-4a67-9042-898ee37ed2a8" (UID: "f3f94370-8ffb-4a67-9042-898ee37ed2a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.750389 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3f94370-8ffb-4a67-9042-898ee37ed2a8-kube-api-access-hldnt" (OuterVolumeSpecName: "kube-api-access-hldnt") pod "f3f94370-8ffb-4a67-9042-898ee37ed2a8" (UID: "f3f94370-8ffb-4a67-9042-898ee37ed2a8"). InnerVolumeSpecName "kube-api-access-hldnt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.757093 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3f94370-8ffb-4a67-9042-898ee37ed2a8" (UID: "f3f94370-8ffb-4a67-9042-898ee37ed2a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.826027 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hldnt\" (UniqueName: \"kubernetes.io/projected/f3f94370-8ffb-4a67-9042-898ee37ed2a8-kube-api-access-hldnt\") on node \"crc\" DevicePath \"\"" Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.826073 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:03:09 crc kubenswrapper[5012]: I0219 06:03:09.826094 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f94370-8ffb-4a67-9042-898ee37ed2a8-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.001877 5012 generic.go:334] "Generic (PLEG): container finished" podID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerID="553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff" exitCode=0 Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.001946 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qd2s5" event={"ID":"f3f94370-8ffb-4a67-9042-898ee37ed2a8","Type":"ContainerDied","Data":"553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff"} Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.001990 5012 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-qd2s5" event={"ID":"f3f94370-8ffb-4a67-9042-898ee37ed2a8","Type":"ContainerDied","Data":"c230c80d248b01025994ea307fc0f0128580771c2a08a4d5f702819870fdea83"} Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.002023 5012 scope.go:117] "RemoveContainer" containerID="553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.002206 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qd2s5" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.034738 5012 scope.go:117] "RemoveContainer" containerID="7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.060564 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qd2s5"] Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.062194 5012 scope.go:117] "RemoveContainer" containerID="0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.072764 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qd2s5"] Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.142373 5012 scope.go:117] "RemoveContainer" containerID="553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff" Feb 19 06:03:10 crc kubenswrapper[5012]: E0219 06:03:10.142962 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff\": container with ID starting with 553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff not found: ID does not exist" containerID="553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.143011 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff"} err="failed to get container status \"553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff\": rpc error: code = NotFound desc = could not find container \"553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff\": container with ID starting with 553e7f6052dd87e7dc9b1edf919d933d221771ebc1a20f5dc7177d23029fb0ff not found: ID does not exist" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.143043 5012 scope.go:117] "RemoveContainer" containerID="7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f" Feb 19 06:03:10 crc kubenswrapper[5012]: E0219 06:03:10.145570 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f\": container with ID starting with 7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f not found: ID does not exist" containerID="7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.145618 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f"} err="failed to get container status \"7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f\": rpc error: code = NotFound desc = could not find container \"7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f\": container with ID starting with 7e2b8598ba91c82bb53cf8f54c034b96c28c0d9d67ce979b8308a81730a7507f not found: ID does not exist" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.145647 5012 scope.go:117] "RemoveContainer" containerID="0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb" Feb 19 06:03:10 crc kubenswrapper[5012]: E0219 
06:03:10.146026 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb\": container with ID starting with 0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb not found: ID does not exist" containerID="0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.146060 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb"} err="failed to get container status \"0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb\": rpc error: code = NotFound desc = could not find container \"0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb\": container with ID starting with 0a929a432c6b921bf3950fe93f3b38ba9867b2a143bf016d9ff0421a5504b6eb not found: ID does not exist" Feb 19 06:03:10 crc kubenswrapper[5012]: I0219 06:03:10.724714 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" path="/var/lib/kubelet/pods/f3f94370-8ffb-4a67-9042-898ee37ed2a8/volumes" Feb 19 06:03:19 crc kubenswrapper[5012]: I0219 06:03:19.703446 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:03:19 crc kubenswrapper[5012]: E0219 06:03:19.704279 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:03:33 crc kubenswrapper[5012]: I0219 06:03:33.703548 
5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:03:33 crc kubenswrapper[5012]: E0219 06:03:33.704666 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:03:47 crc kubenswrapper[5012]: I0219 06:03:47.704713 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:03:47 crc kubenswrapper[5012]: E0219 06:03:47.706627 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:04:01 crc kubenswrapper[5012]: I0219 06:04:01.702810 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:04:01 crc kubenswrapper[5012]: E0219 06:04:01.703832 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:04:12 crc kubenswrapper[5012]: I0219 
06:04:12.703776 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:04:12 crc kubenswrapper[5012]: E0219 06:04:12.704821 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:04:14 crc kubenswrapper[5012]: I0219 06:04:14.811433 5012 generic.go:334] "Generic (PLEG): container finished" podID="fcace677-35b0-499f-998c-99168fbfa0af" containerID="ead8d5cbfbadc07cdc6949287d7eaad0d3adb71e861dbd504d482651d9e45f96" exitCode=0 Feb 19 06:04:14 crc kubenswrapper[5012]: I0219 06:04:14.811579 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" event={"ID":"fcace677-35b0-499f-998c-99168fbfa0af","Type":"ContainerDied","Data":"ead8d5cbfbadc07cdc6949287d7eaad0d3adb71e861dbd504d482651d9e45f96"} Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.284211 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.348101 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-combined-ca-bundle\") pod \"fcace677-35b0-499f-998c-99168fbfa0af\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.348297 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-secret-0\") pod \"fcace677-35b0-499f-998c-99168fbfa0af\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.348429 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-ssh-key-openstack-edpm-ipam\") pod \"fcace677-35b0-499f-998c-99168fbfa0af\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.348955 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-inventory\") pod \"fcace677-35b0-499f-998c-99168fbfa0af\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.349055 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g9n7\" (UniqueName: \"kubernetes.io/projected/fcace677-35b0-499f-998c-99168fbfa0af-kube-api-access-6g9n7\") pod \"fcace677-35b0-499f-998c-99168fbfa0af\" (UID: \"fcace677-35b0-499f-998c-99168fbfa0af\") " Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.356647 5012 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcace677-35b0-499f-998c-99168fbfa0af-kube-api-access-6g9n7" (OuterVolumeSpecName: "kube-api-access-6g9n7") pod "fcace677-35b0-499f-998c-99168fbfa0af" (UID: "fcace677-35b0-499f-998c-99168fbfa0af"). InnerVolumeSpecName "kube-api-access-6g9n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.357106 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "fcace677-35b0-499f-998c-99168fbfa0af" (UID: "fcace677-35b0-499f-998c-99168fbfa0af"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.380164 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-inventory" (OuterVolumeSpecName: "inventory") pod "fcace677-35b0-499f-998c-99168fbfa0af" (UID: "fcace677-35b0-499f-998c-99168fbfa0af"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.393842 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fcace677-35b0-499f-998c-99168fbfa0af" (UID: "fcace677-35b0-499f-998c-99168fbfa0af"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.408004 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "fcace677-35b0-499f-998c-99168fbfa0af" (UID: "fcace677-35b0-499f-998c-99168fbfa0af"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.452479 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.452514 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g9n7\" (UniqueName: \"kubernetes.io/projected/fcace677-35b0-499f-998c-99168fbfa0af-kube-api-access-6g9n7\") on node \"crc\" DevicePath \"\"" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.452525 5012 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.452535 5012 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.452544 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fcace677-35b0-499f-998c-99168fbfa0af-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.836022 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" event={"ID":"fcace677-35b0-499f-998c-99168fbfa0af","Type":"ContainerDied","Data":"845fa55489eb1ebebf023adf297c3cff09eae6d31e26dd2248e57ae7baeee857"} Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.836447 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="845fa55489eb1ebebf023adf297c3cff09eae6d31e26dd2248e57ae7baeee857" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.836162 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-2n79s" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.957545 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"] Feb 19 06:04:16 crc kubenswrapper[5012]: E0219 06:04:16.958223 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerName="extract-content" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.958255 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerName="extract-content" Feb 19 06:04:16 crc kubenswrapper[5012]: E0219 06:04:16.958280 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcace677-35b0-499f-998c-99168fbfa0af" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.958294 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcace677-35b0-499f-998c-99168fbfa0af" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 06:04:16 crc kubenswrapper[5012]: E0219 06:04:16.958355 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerName="registry-server" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.958368 5012 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerName="registry-server" Feb 19 06:04:16 crc kubenswrapper[5012]: E0219 06:04:16.958405 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerName="extract-utilities" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.958418 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerName="extract-utilities" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.958743 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcace677-35b0-499f-998c-99168fbfa0af" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.958814 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3f94370-8ffb-4a67-9042-898ee37ed2a8" containerName="registry-server" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.959986 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.962661 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.963948 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.964142 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.964437 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.964622 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.964632 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.970768 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"] Feb 19 06:04:16 crc kubenswrapper[5012]: I0219 06:04:16.971694 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.065689 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: 
I0219 06:04:17.065744 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a6116441-2985-4723-9889-6c3422159243-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.065814 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.065839 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.065897 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.065953 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.066032 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.066129 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.066150 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrwww\" (UniqueName: \"kubernetes.io/projected/a6116441-2985-4723-9889-6c3422159243-kube-api-access-mrwww\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.066171 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: 
\"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.066450 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.168587 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.168636 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrwww\" (UniqueName: \"kubernetes.io/projected/a6116441-2985-4723-9889-6c3422159243-kube-api-access-mrwww\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.168677 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.168786 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.168866 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.168909 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a6116441-2985-4723-9889-6c3422159243-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.168981 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.169016 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.169091 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.169119 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.169153 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.170820 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a6116441-2985-4723-9889-6c3422159243-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.173622 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.174966 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.175800 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.176268 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.177143 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.177272 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.178850 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.179388 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.188426 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.196180 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrwww\" (UniqueName: \"kubernetes.io/projected/a6116441-2985-4723-9889-6c3422159243-kube-api-access-mrwww\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p67w4\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.284001 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:04:17 crc kubenswrapper[5012]: I0219 06:04:17.863718 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"]
Feb 19 06:04:18 crc kubenswrapper[5012]: I0219 06:04:18.863058 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" event={"ID":"a6116441-2985-4723-9889-6c3422159243","Type":"ContainerStarted","Data":"967e14daca86ede14e132cc858325fa0f57a4633145bebf5ee02898a3d72c1e2"}
Feb 19 06:04:18 crc kubenswrapper[5012]: I0219 06:04:18.863835 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" event={"ID":"a6116441-2985-4723-9889-6c3422159243","Type":"ContainerStarted","Data":"5de839f474f5b20e3d0844ff7d1bc3e34f78d929794f0ed3f351fca954643e98"}
Feb 19 06:04:18 crc kubenswrapper[5012]: I0219 06:04:18.894050 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" podStartSLOduration=2.421512818 podStartE2EDuration="2.89401592s" podCreationTimestamp="2026-02-19 06:04:16 +0000 UTC" firstStartedPulling="2026-02-19 06:04:17.866954598 +0000 UTC m=+2353.900277207" lastFinishedPulling="2026-02-19 06:04:18.33945771 +0000 UTC m=+2354.372780309" observedRunningTime="2026-02-19 06:04:18.880685343 +0000 UTC m=+2354.914007942" watchObservedRunningTime="2026-02-19 06:04:18.89401592 +0000 UTC m=+2354.927338529"
Feb 19 06:04:25 crc kubenswrapper[5012]: I0219 06:04:25.703464 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f"
Feb 19 06:04:25 crc kubenswrapper[5012]: E0219 06:04:25.704567 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:04:36 crc kubenswrapper[5012]: I0219 06:04:36.703393 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f"
Feb 19 06:04:36 crc kubenswrapper[5012]: E0219 06:04:36.704634 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:04:48 crc kubenswrapper[5012]: I0219 06:04:48.702928 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f"
Feb 19 06:04:48 crc kubenswrapper[5012]: E0219 06:04:48.703689 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:04:59 crc kubenswrapper[5012]: I0219 06:04:59.703391 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f"
Feb 19 06:04:59 crc kubenswrapper[5012]: E0219 06:04:59.704635 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:05:12 crc kubenswrapper[5012]: I0219 06:05:12.703548 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f"
Feb 19 06:05:12 crc kubenswrapper[5012]: E0219 06:05:12.704732 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:05:25 crc kubenswrapper[5012]: I0219 06:05:25.702901 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f"
Feb 19 06:05:25 crc kubenswrapper[5012]: E0219 06:05:25.703726 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:05:37 crc kubenswrapper[5012]: I0219 06:05:37.704792 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f"
Feb 19 06:05:37 crc kubenswrapper[5012]: E0219 06:05:37.706228 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:05:51 crc kubenswrapper[5012]: I0219 06:05:51.703662 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f"
Feb 19 06:05:51 crc kubenswrapper[5012]: E0219 06:05:51.704953 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:06:02 crc kubenswrapper[5012]: I0219 06:06:02.703634 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f"
Feb 19 06:06:02 crc kubenswrapper[5012]: E0219 06:06:02.704422 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.691951 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ctxn5"]
Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.696011 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ctxn5"
Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.723238 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ctxn5"]
Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.804221 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4s7m\" (UniqueName: \"kubernetes.io/projected/c880ffe9-ca26-4a2a-bab2-3343004ff665-kube-api-access-w4s7m\") pod \"community-operators-ctxn5\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " pod="openshift-marketplace/community-operators-ctxn5"
Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.804321 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-catalog-content\") pod \"community-operators-ctxn5\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " pod="openshift-marketplace/community-operators-ctxn5"
Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.804343 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-utilities\") pod \"community-operators-ctxn5\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " pod="openshift-marketplace/community-operators-ctxn5"
Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.906087 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4s7m\" (UniqueName: \"kubernetes.io/projected/c880ffe9-ca26-4a2a-bab2-3343004ff665-kube-api-access-w4s7m\") pod \"community-operators-ctxn5\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " pod="openshift-marketplace/community-operators-ctxn5"
Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.906197 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-catalog-content\") pod \"community-operators-ctxn5\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " pod="openshift-marketplace/community-operators-ctxn5"
Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.906225 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-utilities\") pod \"community-operators-ctxn5\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " pod="openshift-marketplace/community-operators-ctxn5"
Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.906679 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-catalog-content\") pod \"community-operators-ctxn5\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " pod="openshift-marketplace/community-operators-ctxn5"
Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.906794 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-utilities\") pod \"community-operators-ctxn5\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " pod="openshift-marketplace/community-operators-ctxn5"
Feb 19 06:06:12 crc kubenswrapper[5012]: I0219 06:06:12.928159 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4s7m\" (UniqueName: \"kubernetes.io/projected/c880ffe9-ca26-4a2a-bab2-3343004ff665-kube-api-access-w4s7m\") pod \"community-operators-ctxn5\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") " pod="openshift-marketplace/community-operators-ctxn5"
Feb 19 06:06:13 crc kubenswrapper[5012]: I0219 06:06:13.074797 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ctxn5"
Feb 19 06:06:13 crc kubenswrapper[5012]: I0219 06:06:13.618259 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ctxn5"]
Feb 19 06:06:14 crc kubenswrapper[5012]: I0219 06:06:14.312466 5012 generic.go:334] "Generic (PLEG): container finished" podID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerID="8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde" exitCode=0
Feb 19 06:06:14 crc kubenswrapper[5012]: I0219 06:06:14.312527 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctxn5" event={"ID":"c880ffe9-ca26-4a2a-bab2-3343004ff665","Type":"ContainerDied","Data":"8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde"}
Feb 19 06:06:14 crc kubenswrapper[5012]: I0219 06:06:14.312857 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctxn5" event={"ID":"c880ffe9-ca26-4a2a-bab2-3343004ff665","Type":"ContainerStarted","Data":"db61563b27fc6fc93701d7a6fee141f2ddcf944bdc688ed83b67029233dc21be"}
Feb 19 06:06:16 crc kubenswrapper[5012]: I0219 06:06:16.334532 5012 generic.go:334] "Generic (PLEG): container finished" podID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerID="2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12" exitCode=0
Feb 19 06:06:16 crc kubenswrapper[5012]: I0219 06:06:16.334603 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctxn5" event={"ID":"c880ffe9-ca26-4a2a-bab2-3343004ff665","Type":"ContainerDied","Data":"2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12"}
Feb 19 06:06:17 crc kubenswrapper[5012]: I0219 06:06:17.347662 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctxn5" event={"ID":"c880ffe9-ca26-4a2a-bab2-3343004ff665","Type":"ContainerStarted","Data":"4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3"}
Feb 19 06:06:17 crc kubenswrapper[5012]: I0219 06:06:17.371641 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ctxn5" podStartSLOduration=2.905620774 podStartE2EDuration="5.371626377s" podCreationTimestamp="2026-02-19 06:06:12 +0000 UTC" firstStartedPulling="2026-02-19 06:06:14.314615422 +0000 UTC m=+2470.347937991" lastFinishedPulling="2026-02-19 06:06:16.780621025 +0000 UTC m=+2472.813943594" observedRunningTime="2026-02-19 06:06:17.367048625 +0000 UTC m=+2473.400371184" watchObservedRunningTime="2026-02-19 06:06:17.371626377 +0000 UTC m=+2473.404948946"
Feb 19 06:06:17 crc kubenswrapper[5012]: I0219 06:06:17.703720 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f"
Feb 19 06:06:18 crc kubenswrapper[5012]: I0219 06:06:18.359052 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"8f0e2de409f869f343439fd788a0683b28b6e560ce8f601661640064fc2c4afc"}
Feb 19 06:06:23 crc kubenswrapper[5012]: I0219 06:06:23.075566 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ctxn5"
Feb 19 06:06:23 crc kubenswrapper[5012]: I0219 06:06:23.077119 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ctxn5"
Feb 19 06:06:23 crc kubenswrapper[5012]: I0219 06:06:23.125862 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ctxn5"
Feb 19 06:06:23 crc kubenswrapper[5012]: I0219 06:06:23.474281 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ctxn5"
Feb 19 06:06:25 crc kubenswrapper[5012]: I0219 06:06:25.892539 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ctxn5"]
Feb 19 06:06:26 crc kubenswrapper[5012]: I0219 06:06:26.435359 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ctxn5" podUID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerName="registry-server" containerID="cri-o://4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3" gracePeriod=2
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.006438 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ctxn5"
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.111127 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-catalog-content\") pod \"c880ffe9-ca26-4a2a-bab2-3343004ff665\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") "
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.111181 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4s7m\" (UniqueName: \"kubernetes.io/projected/c880ffe9-ca26-4a2a-bab2-3343004ff665-kube-api-access-w4s7m\") pod \"c880ffe9-ca26-4a2a-bab2-3343004ff665\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") "
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.111259 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-utilities\") pod \"c880ffe9-ca26-4a2a-bab2-3343004ff665\" (UID: \"c880ffe9-ca26-4a2a-bab2-3343004ff665\") "
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.112554 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-utilities" (OuterVolumeSpecName: "utilities") pod "c880ffe9-ca26-4a2a-bab2-3343004ff665" (UID: "c880ffe9-ca26-4a2a-bab2-3343004ff665"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.116750 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c880ffe9-ca26-4a2a-bab2-3343004ff665-kube-api-access-w4s7m" (OuterVolumeSpecName: "kube-api-access-w4s7m") pod "c880ffe9-ca26-4a2a-bab2-3343004ff665" (UID: "c880ffe9-ca26-4a2a-bab2-3343004ff665"). InnerVolumeSpecName "kube-api-access-w4s7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.175521 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c880ffe9-ca26-4a2a-bab2-3343004ff665" (UID: "c880ffe9-ca26-4a2a-bab2-3343004ff665"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.213727 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.214063 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c880ffe9-ca26-4a2a-bab2-3343004ff665-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.214074 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4s7m\" (UniqueName: \"kubernetes.io/projected/c880ffe9-ca26-4a2a-bab2-3343004ff665-kube-api-access-w4s7m\") on node \"crc\" DevicePath \"\""
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.450736 5012 generic.go:334] "Generic (PLEG): container finished" podID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerID="4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3" exitCode=0
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.450779 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctxn5" event={"ID":"c880ffe9-ca26-4a2a-bab2-3343004ff665","Type":"ContainerDied","Data":"4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3"}
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.450806 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ctxn5" event={"ID":"c880ffe9-ca26-4a2a-bab2-3343004ff665","Type":"ContainerDied","Data":"db61563b27fc6fc93701d7a6fee141f2ddcf944bdc688ed83b67029233dc21be"}
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.450837 5012 scope.go:117] "RemoveContainer" containerID="4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3"
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.450865 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ctxn5"
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.497597 5012 scope.go:117] "RemoveContainer" containerID="2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12"
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.530204 5012 scope.go:117] "RemoveContainer" containerID="8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde"
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.530364 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ctxn5"]
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.551783 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ctxn5"]
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.574804 5012 scope.go:117] "RemoveContainer" containerID="4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3"
Feb 19 06:06:27 crc kubenswrapper[5012]: E0219 06:06:27.575405 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3\": container with ID starting with 4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3 not found: ID does not exist" containerID="4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3"
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.575467 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3"} err="failed to get container status \"4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3\": rpc error: code = NotFound desc = could not find container \"4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3\": container with ID starting with 4214cfe52d5c8165d9275fc0e97898366eda17278c3c048b88af0f027987dee3 not found: ID does not exist"
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.575500 5012 scope.go:117] "RemoveContainer" containerID="2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12"
Feb 19 06:06:27 crc kubenswrapper[5012]: E0219 06:06:27.576267 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12\": container with ID starting with 2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12 not found: ID does not exist" containerID="2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12"
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.576356 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12"} err="failed to get container status \"2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12\": rpc error: code = NotFound desc = could not find container \"2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12\": container with ID starting with 2be81a27d6a631004108b7efaa04b559c518d6c0dc0eb8a4ece7418e62c3ba12 not found: ID does not exist"
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.576389 5012 scope.go:117] "RemoveContainer" containerID="8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde"
Feb 19 06:06:27 crc kubenswrapper[5012]: E0219 06:06:27.576686 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde\": container with ID starting with 8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde not found: ID does not exist" containerID="8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde"
Feb 19 06:06:27 crc kubenswrapper[5012]: I0219 06:06:27.576713 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde"} err="failed to get container status \"8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde\": rpc error: code = NotFound desc = could not find container \"8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde\": container with ID starting with 8f604dc276d6601864f2f56f3764a7f4be70c9e59a3a0c300edc6e4edb72fbde not found: ID does not exist"
Feb 19 06:06:28 crc kubenswrapper[5012]: I0219 06:06:28.716345 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c880ffe9-ca26-4a2a-bab2-3343004ff665" path="/var/lib/kubelet/pods/c880ffe9-ca26-4a2a-bab2-3343004ff665/volumes"
Feb 19 06:06:50 crc kubenswrapper[5012]: I0219 06:06:50.744136 5012 generic.go:334] "Generic (PLEG): container finished" podID="a6116441-2985-4723-9889-6c3422159243" containerID="967e14daca86ede14e132cc858325fa0f57a4633145bebf5ee02898a3d72c1e2" exitCode=0
Feb 19 06:06:50 crc kubenswrapper[5012]: I0219 06:06:50.744657 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" event={"ID":"a6116441-2985-4723-9889-6c3422159243","Type":"ContainerDied","Data":"967e14daca86ede14e132cc858325fa0f57a4633145bebf5ee02898a3d72c1e2"}
Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.262836 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4"
Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.384265 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-inventory\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") "
Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.384360 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-0\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") "
Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.384396 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrwww\" (UniqueName: \"kubernetes.io/projected/a6116441-2985-4723-9889-6c3422159243-kube-api-access-mrwww\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") "
Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.384437 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-ssh-key-openstack-edpm-ipam\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") "
Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.384457 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-2\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") "
Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.384529 5012 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a6116441-2985-4723-9889-6c3422159243-nova-extra-config-0\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.384592 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-0\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.384672 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-1\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.385365 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-combined-ca-bundle\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.385401 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-3\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.385467 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-1\") pod \"a6116441-2985-4723-9889-6c3422159243\" (UID: \"a6116441-2985-4723-9889-6c3422159243\") " Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.393778 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6116441-2985-4723-9889-6c3422159243-kube-api-access-mrwww" (OuterVolumeSpecName: "kube-api-access-mrwww") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "kube-api-access-mrwww". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.396442 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.434427 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.435454 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). 
InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.440852 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-inventory" (OuterVolumeSpecName: "inventory") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.444177 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.457872 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.463883 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.464694 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.466924 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.467436 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6116441-2985-4723-9889-6c3422159243-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "a6116441-2985-4723-9889-6c3422159243" (UID: "a6116441-2985-4723-9889-6c3422159243"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489266 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489352 5012 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489370 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrwww\" (UniqueName: \"kubernetes.io/projected/a6116441-2985-4723-9889-6c3422159243-kube-api-access-mrwww\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489383 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489396 5012 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489408 5012 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/a6116441-2985-4723-9889-6c3422159243-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489420 5012 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-0\") on 
node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489432 5012 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489443 5012 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489454 5012 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.489465 5012 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/a6116441-2985-4723-9889-6c3422159243-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.802174 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" event={"ID":"a6116441-2985-4723-9889-6c3422159243","Type":"ContainerDied","Data":"5de839f474f5b20e3d0844ff7d1bc3e34f78d929794f0ed3f351fca954643e98"} Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.802210 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5de839f474f5b20e3d0844ff7d1bc3e34f78d929794f0ed3f351fca954643e98" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.802461 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p67w4" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.897714 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx"] Feb 19 06:06:52 crc kubenswrapper[5012]: E0219 06:06:52.898129 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerName="extract-utilities" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.898145 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerName="extract-utilities" Feb 19 06:06:52 crc kubenswrapper[5012]: E0219 06:06:52.898157 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6116441-2985-4723-9889-6c3422159243" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.898165 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6116441-2985-4723-9889-6c3422159243" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 06:06:52 crc kubenswrapper[5012]: E0219 06:06:52.898176 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerName="registry-server" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.898181 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerName="registry-server" Feb 19 06:06:52 crc kubenswrapper[5012]: E0219 06:06:52.898205 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerName="extract-content" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.898212 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerName="extract-content" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.898424 5012 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c880ffe9-ca26-4a2a-bab2-3343004ff665" containerName="registry-server" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.898439 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6116441-2985-4723-9889-6c3422159243" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.899071 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.903787 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.903917 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.903937 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sfbp2" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.903874 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.904257 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 06:06:52 crc kubenswrapper[5012]: I0219 06:06:52.908655 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx"] Feb 19 06:06:52 crc kubenswrapper[5012]: E0219 06:06:52.964697 5012 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6116441_2985_4723_9889_6c3422159243.slice\": RecentStats: unable to find data in memory cache]" Feb 19 
06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.001606 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.001660 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.001695 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.002124 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.002233 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.002299 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.002422 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m6vg\" (UniqueName: \"kubernetes.io/projected/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-kube-api-access-9m6vg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.103710 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.103781 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.103829 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.103877 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m6vg\" (UniqueName: \"kubernetes.io/projected/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-kube-api-access-9m6vg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.103911 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.103943 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: 
\"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.103980 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.107602 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.108111 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.108926 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.109763 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.110147 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.110717 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.122347 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m6vg\" (UniqueName: \"kubernetes.io/projected/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-kube-api-access-9m6vg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.246258 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx"
Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.876701 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx"]
Feb 19 06:06:53 crc kubenswrapper[5012]: W0219 06:06:53.882992 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73fe066f_3ee6_4ffc_aeb4_874c14fb0b84.slice/crio-10f98d940b12a27091c615ddb113b42fc9f4de3bee9bf6a32525d3715e64dd37 WatchSource:0}: Error finding container 10f98d940b12a27091c615ddb113b42fc9f4de3bee9bf6a32525d3715e64dd37: Status 404 returned error can't find the container with id 10f98d940b12a27091c615ddb113b42fc9f4de3bee9bf6a32525d3715e64dd37
Feb 19 06:06:53 crc kubenswrapper[5012]: I0219 06:06:53.886529 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 06:06:54 crc kubenswrapper[5012]: I0219 06:06:54.838402 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" event={"ID":"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84","Type":"ContainerStarted","Data":"66c2389b42efddeb455e935a3386251734490b5a184dbebd586b025e56124a97"}
Feb 19 06:06:54 crc kubenswrapper[5012]: I0219 06:06:54.838692 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" event={"ID":"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84","Type":"ContainerStarted","Data":"10f98d940b12a27091c615ddb113b42fc9f4de3bee9bf6a32525d3715e64dd37"}
Feb 19 06:06:54 crc kubenswrapper[5012]: I0219 06:06:54.863849 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" podStartSLOduration=2.416563312 podStartE2EDuration="2.863825697s" podCreationTimestamp="2026-02-19 06:06:52 +0000 UTC" firstStartedPulling="2026-02-19 06:06:53.886261296 +0000 UTC m=+2509.919583875" lastFinishedPulling="2026-02-19 06:06:54.333523651 +0000 UTC m=+2510.366846260" observedRunningTime="2026-02-19 06:06:54.857529533 +0000 UTC m=+2510.890852182" watchObservedRunningTime="2026-02-19 06:06:54.863825697 +0000 UTC m=+2510.897148286"
Feb 19 06:08:44 crc kubenswrapper[5012]: I0219 06:08:44.430826 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 06:08:44 crc kubenswrapper[5012]: I0219 06:08:44.431467 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 06:08:56 crc kubenswrapper[5012]: I0219 06:08:56.251506 5012 generic.go:334] "Generic (PLEG): container finished" podID="73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" containerID="66c2389b42efddeb455e935a3386251734490b5a184dbebd586b025e56124a97" exitCode=0
Feb 19 06:08:56 crc kubenswrapper[5012]: I0219 06:08:56.251576 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" event={"ID":"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84","Type":"ContainerDied","Data":"66c2389b42efddeb455e935a3386251734490b5a184dbebd586b025e56124a97"}
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.718393 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx"
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.865445 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m6vg\" (UniqueName: \"kubernetes.io/projected/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-kube-api-access-9m6vg\") pod \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") "
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.865891 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-inventory\") pod \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") "
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.866659 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-telemetry-combined-ca-bundle\") pod \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") "
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.866814 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-2\") pod \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") "
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.866895 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-0\") pod \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") "
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.867030 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ssh-key-openstack-edpm-ipam\") pod \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") "
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.867144 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-1\") pod \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\" (UID: \"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84\") "
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.872021 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" (UID: "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.873331 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-kube-api-access-9m6vg" (OuterVolumeSpecName: "kube-api-access-9m6vg") pod "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" (UID: "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84"). InnerVolumeSpecName "kube-api-access-9m6vg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.909874 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-inventory" (OuterVolumeSpecName: "inventory") pod "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" (UID: "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.910146 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" (UID: "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.915805 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" (UID: "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.916331 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" (UID: "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.933478 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" (UID: "73fe066f-3ee6-4ffc-aeb4-874c14fb0b84"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.970929 5012 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.970990 5012 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.971017 5012 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.971038 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.971058 5012 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.971079 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m6vg\" (UniqueName: \"kubernetes.io/projected/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-kube-api-access-9m6vg\") on node \"crc\" DevicePath \"\""
Feb 19 06:08:57 crc kubenswrapper[5012]: I0219 06:08:57.971099 5012 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73fe066f-3ee6-4ffc-aeb4-874c14fb0b84-inventory\") on node \"crc\" DevicePath \"\""
Feb 19 06:08:58 crc kubenswrapper[5012]: I0219 06:08:58.278000 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx" event={"ID":"73fe066f-3ee6-4ffc-aeb4-874c14fb0b84","Type":"ContainerDied","Data":"10f98d940b12a27091c615ddb113b42fc9f4de3bee9bf6a32525d3715e64dd37"}
Feb 19 06:08:58 crc kubenswrapper[5012]: I0219 06:08:58.278051 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10f98d940b12a27091c615ddb113b42fc9f4de3bee9bf6a32525d3715e64dd37"
Feb 19 06:08:58 crc kubenswrapper[5012]: I0219 06:08:58.278132 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx"
Feb 19 06:09:14 crc kubenswrapper[5012]: I0219 06:09:14.431193 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 06:09:14 crc kubenswrapper[5012]: I0219 06:09:14.432278 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.381506 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5nvpj"]
Feb 19 06:09:38 crc kubenswrapper[5012]: E0219 06:09:38.382953 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.382979 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.383374 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="73fe066f-3ee6-4ffc-aeb4-874c14fb0b84" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.392103 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5nvpj"
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.396744 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5nvpj"]
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.494407 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-utilities\") pod \"redhat-operators-5nvpj\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " pod="openshift-marketplace/redhat-operators-5nvpj"
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.494861 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-catalog-content\") pod \"redhat-operators-5nvpj\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " pod="openshift-marketplace/redhat-operators-5nvpj"
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.494891 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp8np\" (UniqueName: \"kubernetes.io/projected/594190bd-cf27-4446-b5b9-7fb84361c200-kube-api-access-sp8np\") pod \"redhat-operators-5nvpj\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " pod="openshift-marketplace/redhat-operators-5nvpj"
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.597118 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-utilities\") pod \"redhat-operators-5nvpj\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " pod="openshift-marketplace/redhat-operators-5nvpj"
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.597292 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-catalog-content\") pod \"redhat-operators-5nvpj\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " pod="openshift-marketplace/redhat-operators-5nvpj"
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.597386 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp8np\" (UniqueName: \"kubernetes.io/projected/594190bd-cf27-4446-b5b9-7fb84361c200-kube-api-access-sp8np\") pod \"redhat-operators-5nvpj\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " pod="openshift-marketplace/redhat-operators-5nvpj"
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.597905 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-utilities\") pod \"redhat-operators-5nvpj\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " pod="openshift-marketplace/redhat-operators-5nvpj"
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.598013 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-catalog-content\") pod \"redhat-operators-5nvpj\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " pod="openshift-marketplace/redhat-operators-5nvpj"
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.638616 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp8np\" (UniqueName: \"kubernetes.io/projected/594190bd-cf27-4446-b5b9-7fb84361c200-kube-api-access-sp8np\") pod \"redhat-operators-5nvpj\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " pod="openshift-marketplace/redhat-operators-5nvpj"
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.742869 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5nvpj"
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.912366 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.913070 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="prometheus" containerID="cri-o://2911dc6ac75bd4dfdfed36bc08cc01049520edecc0e49a7a619bb704bce3f33a" gracePeriod=600
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.913355 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="thanos-sidecar" containerID="cri-o://2854f6610edd35f9918bcf970a2c86698cd9bdd18894ce4faa0b91d3747adc47" gracePeriod=600
Feb 19 06:09:38 crc kubenswrapper[5012]: I0219 06:09:38.913462 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="config-reloader" containerID="cri-o://fd666beb3889b82cdcffe025f5999afc68f3be8d81898ab269494cf52c444649" gracePeriod=600
Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.287844 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5nvpj"]
Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.742703 5012 generic.go:334] "Generic (PLEG): container finished" podID="594190bd-cf27-4446-b5b9-7fb84361c200" containerID="219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89" exitCode=0
Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.742787 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nvpj" event={"ID":"594190bd-cf27-4446-b5b9-7fb84361c200","Type":"ContainerDied","Data":"219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89"}
Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.743839 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nvpj" event={"ID":"594190bd-cf27-4446-b5b9-7fb84361c200","Type":"ContainerStarted","Data":"535e54cd69617ddc350f2f90e81fe2153053b3c460de6253cf093e44a7c59c54"}
Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.752019 5012 generic.go:334] "Generic (PLEG): container finished" podID="8509cc68-c35e-47ea-a634-896143d747ed" containerID="2854f6610edd35f9918bcf970a2c86698cd9bdd18894ce4faa0b91d3747adc47" exitCode=0
Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.752056 5012 generic.go:334] "Generic (PLEG): container finished" podID="8509cc68-c35e-47ea-a634-896143d747ed" containerID="fd666beb3889b82cdcffe025f5999afc68f3be8d81898ab269494cf52c444649" exitCode=0
Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.752070 5012 generic.go:334] "Generic (PLEG): container finished" podID="8509cc68-c35e-47ea-a634-896143d747ed" containerID="2911dc6ac75bd4dfdfed36bc08cc01049520edecc0e49a7a619bb704bce3f33a" exitCode=0
Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.752101 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerDied","Data":"2854f6610edd35f9918bcf970a2c86698cd9bdd18894ce4faa0b91d3747adc47"}
Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.752143 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerDied","Data":"fd666beb3889b82cdcffe025f5999afc68f3be8d81898ab269494cf52c444649"}
Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.752158 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerDied","Data":"2911dc6ac75bd4dfdfed36bc08cc01049520edecc0e49a7a619bb704bce3f33a"}
Feb 19 06:09:39 crc kubenswrapper[5012]: I0219 06:09:39.966301 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.032575 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-tls-assets\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") "
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.032641 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") "
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.032747 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") "
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.032813 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8509cc68-c35e-47ea-a634-896143d747ed-config-out\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") "
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.032887 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") "
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.032916 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-2\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") "
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.033040 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq95l\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-kube-api-access-tq95l\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") "
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.033082 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-1\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") "
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.033199 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-secret-combined-ca-bundle\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") "
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.033242 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-config\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") "
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.033269 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-thanos-prometheus-http-client-file\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") "
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.033334 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-0\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") "
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.033474 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"8509cc68-c35e-47ea-a634-896143d747ed\" (UID: \"8509cc68-c35e-47ea-a634-896143d747ed\") "
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.038712 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.042236 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.042482 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.045548 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.045741 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-config" (OuterVolumeSpecName: "config") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.048512 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.048686 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8509cc68-c35e-47ea-a634-896143d747ed-config-out" (OuterVolumeSpecName: "config-out") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.052374 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.053085 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.067100 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-kube-api-access-tq95l" (OuterVolumeSpecName: "kube-api-access-tq95l") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "kube-api-access-tq95l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.067359 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.092489 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "pvc-7fbf442c-c467-48a5-9a2f-86a74d778584". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135701 5012 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135733 5012 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8509cc68-c35e-47ea-a634-896143d747ed-config-out\") on node \"crc\" DevicePath \"\""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135744 5012 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135755 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq95l\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-kube-api-access-tq95l\") on node \"crc\" DevicePath \"\""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135766 5012 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135778 5012 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135786 5012 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-config\") on node \"crc\" DevicePath \"\""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135795 5012 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135804 5012 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8509cc68-c35e-47ea-a634-896143d747ed-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135839 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") on node \"crc\" "
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135850 5012 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8509cc68-c35e-47ea-a634-896143d747ed-tls-assets\") on node \"crc\" DevicePath \"\""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.135860 5012 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.147972 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config" (OuterVolumeSpecName: "web-config") pod "8509cc68-c35e-47ea-a634-896143d747ed" (UID: "8509cc68-c35e-47ea-a634-896143d747ed"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.159938 5012 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.160076 5012 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7fbf442c-c467-48a5-9a2f-86a74d778584" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584") on node "crc"
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.243129 5012 reconciler_common.go:293] "Volume detached for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") on node \"crc\" DevicePath \"\""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.243166 5012 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8509cc68-c35e-47ea-a634-896143d747ed-web-config\") on node \"crc\" DevicePath \"\""
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.771061 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8509cc68-c35e-47ea-a634-896143d747ed","Type":"ContainerDied","Data":"c258d2d68f577aa99acf781abe70e8c1f0bea84a31b7c56b2eca30c2af015cb5"}
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.771446 5012 scope.go:117] "RemoveContainer" containerID="2854f6610edd35f9918bcf970a2c86698cd9bdd18894ce4faa0b91d3747adc47"
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.771191 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.815524 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.832789 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.838809 5012 scope.go:117] "RemoveContainer" containerID="fd666beb3889b82cdcffe025f5999afc68f3be8d81898ab269494cf52c444649"
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.866698 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 19 06:09:40 crc kubenswrapper[5012]: E0219 06:09:40.872897 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="thanos-sidecar"
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.872915 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="thanos-sidecar"
Feb 19 06:09:40 crc kubenswrapper[5012]: E0219 06:09:40.872928 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="prometheus"
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.872936 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="prometheus"
Feb 19 06:09:40 crc kubenswrapper[5012]: E0219 06:09:40.872954 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="init-config-reloader"
Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.872961 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="init-config-reloader"
Feb 19 06:09:40 crc kubenswrapper[5012]: E0219 06:09:40.872990 5012
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="config-reloader" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.872997 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="config-reloader" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.873197 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="prometheus" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.873221 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="config-reloader" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.873238 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="8509cc68-c35e-47ea-a634-896143d747ed" containerName="thanos-sidecar" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.879196 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.888514 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.888708 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.888976 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.889103 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.889203 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.889452 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7bqtw" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.889587 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.893513 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.894732 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.907897 5012 scope.go:117] "RemoveContainer" containerID="2911dc6ac75bd4dfdfed36bc08cc01049520edecc0e49a7a619bb704bce3f33a" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.947003 5012 scope.go:117] "RemoveContainer" 
containerID="4ee433ab916c49fcf886f80ee6ab1bd1a03ffacf8d9e4d295c0b15de25056e64" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.977644 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a64b2810-4982-43ef-ae9f-1e7852394d60-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.977705 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.977808 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978136 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqzlj\" (UniqueName: \"kubernetes.io/projected/a64b2810-4982-43ef-ae9f-1e7852394d60-kube-api-access-wqzlj\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978354 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978526 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a64b2810-4982-43ef-ae9f-1e7852394d60-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978652 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a64b2810-4982-43ef-ae9f-1e7852394d60-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978682 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978715 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978827 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978863 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-config\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978919 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a64b2810-4982-43ef-ae9f-1e7852394d60-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:40 crc kubenswrapper[5012]: I0219 06:09:40.978960 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a64b2810-4982-43ef-ae9f-1e7852394d60-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081413 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081499 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a64b2810-4982-43ef-ae9f-1e7852394d60-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081544 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a64b2810-4982-43ef-ae9f-1e7852394d60-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081566 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081592 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081632 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081653 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-config\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081684 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a64b2810-4982-43ef-ae9f-1e7852394d60-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081710 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a64b2810-4982-43ef-ae9f-1e7852394d60-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081750 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/a64b2810-4982-43ef-ae9f-1e7852394d60-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " 
pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081776 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081814 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.081873 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqzlj\" (UniqueName: \"kubernetes.io/projected/a64b2810-4982-43ef-ae9f-1e7852394d60-kube-api-access-wqzlj\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.083116 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/a64b2810-4982-43ef-ae9f-1e7852394d60-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.083272 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/a64b2810-4982-43ef-ae9f-1e7852394d60-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.084419 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/a64b2810-4982-43ef-ae9f-1e7852394d60-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.088262 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.088575 5012 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.088617 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/80266977aa18e8991458f1f7d5520b709fb21586520e915bbacb4bc2380e455f/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.088962 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.089347 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a64b2810-4982-43ef-ae9f-1e7852394d60-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.090334 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a64b2810-4982-43ef-ae9f-1e7852394d60-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.090627 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.090920 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.093791 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.094887 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a64b2810-4982-43ef-ae9f-1e7852394d60-config\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.101979 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqzlj\" (UniqueName: \"kubernetes.io/projected/a64b2810-4982-43ef-ae9f-1e7852394d60-kube-api-access-wqzlj\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.140853 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7fbf442c-c467-48a5-9a2f-86a74d778584\") pod \"prometheus-metric-storage-0\" (UID: \"a64b2810-4982-43ef-ae9f-1e7852394d60\") " pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.216748 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.705088 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.794346 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a64b2810-4982-43ef-ae9f-1e7852394d60","Type":"ContainerStarted","Data":"42ebd1835f5a5dcad484994cd62ab30919c257d2732704e6785d8ab7c963c43f"} Feb 19 06:09:41 crc kubenswrapper[5012]: I0219 06:09:41.796634 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nvpj" event={"ID":"594190bd-cf27-4446-b5b9-7fb84361c200","Type":"ContainerStarted","Data":"00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927"} Feb 19 06:09:42 crc kubenswrapper[5012]: I0219 06:09:42.717674 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8509cc68-c35e-47ea-a634-896143d747ed" path="/var/lib/kubelet/pods/8509cc68-c35e-47ea-a634-896143d747ed/volumes" Feb 19 06:09:44 crc kubenswrapper[5012]: I0219 06:09:44.431109 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:09:44 crc kubenswrapper[5012]: I0219 06:09:44.431211 5012 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:09:44 crc kubenswrapper[5012]: I0219 06:09:44.431280 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:09:44 crc kubenswrapper[5012]: I0219 06:09:44.432452 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8f0e2de409f869f343439fd788a0683b28b6e560ce8f601661640064fc2c4afc"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:09:44 crc kubenswrapper[5012]: I0219 06:09:44.432537 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://8f0e2de409f869f343439fd788a0683b28b6e560ce8f601661640064fc2c4afc" gracePeriod=600 Feb 19 06:09:45 crc kubenswrapper[5012]: I0219 06:09:45.844836 5012 generic.go:334] "Generic (PLEG): container finished" podID="594190bd-cf27-4446-b5b9-7fb84361c200" containerID="00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927" exitCode=0 Feb 19 06:09:45 crc kubenswrapper[5012]: I0219 06:09:45.844948 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nvpj" event={"ID":"594190bd-cf27-4446-b5b9-7fb84361c200","Type":"ContainerDied","Data":"00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927"} Feb 19 06:09:45 crc kubenswrapper[5012]: I0219 06:09:45.849574 5012 generic.go:334] "Generic (PLEG): container 
finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="8f0e2de409f869f343439fd788a0683b28b6e560ce8f601661640064fc2c4afc" exitCode=0 Feb 19 06:09:45 crc kubenswrapper[5012]: I0219 06:09:45.849615 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"8f0e2de409f869f343439fd788a0683b28b6e560ce8f601661640064fc2c4afc"} Feb 19 06:09:45 crc kubenswrapper[5012]: I0219 06:09:45.849655 5012 scope.go:117] "RemoveContainer" containerID="a9c9ed4ab23e63af7bff709484d66c3a2d31864c600072f97a90ed7c48dea57f" Feb 19 06:09:46 crc kubenswrapper[5012]: I0219 06:09:46.861848 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nvpj" event={"ID":"594190bd-cf27-4446-b5b9-7fb84361c200","Type":"ContainerStarted","Data":"a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38"} Feb 19 06:09:46 crc kubenswrapper[5012]: I0219 06:09:46.865929 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc"} Feb 19 06:09:46 crc kubenswrapper[5012]: I0219 06:09:46.869980 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a64b2810-4982-43ef-ae9f-1e7852394d60","Type":"ContainerStarted","Data":"9ff7a9eb8aea62d1cd4f806571829782a58d9fc03c69bc5ba47ce2d1097c2287"} Feb 19 06:09:46 crc kubenswrapper[5012]: I0219 06:09:46.892335 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5nvpj" podStartSLOduration=2.420594742 podStartE2EDuration="8.892291144s" podCreationTimestamp="2026-02-19 06:09:38 +0000 UTC" firstStartedPulling="2026-02-19 06:09:39.745414119 +0000 UTC 
m=+2675.778736688" lastFinishedPulling="2026-02-19 06:09:46.217110531 +0000 UTC m=+2682.250433090" observedRunningTime="2026-02-19 06:09:46.883266194 +0000 UTC m=+2682.916588783" watchObservedRunningTime="2026-02-19 06:09:46.892291144 +0000 UTC m=+2682.925613733" Feb 19 06:09:48 crc kubenswrapper[5012]: I0219 06:09:48.743597 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:48 crc kubenswrapper[5012]: I0219 06:09:48.744186 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:49 crc kubenswrapper[5012]: I0219 06:09:49.867203 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5nvpj" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" containerName="registry-server" probeResult="failure" output=< Feb 19 06:09:49 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 06:09:49 crc kubenswrapper[5012]: > Feb 19 06:09:55 crc kubenswrapper[5012]: I0219 06:09:55.972545 5012 generic.go:334] "Generic (PLEG): container finished" podID="a64b2810-4982-43ef-ae9f-1e7852394d60" containerID="9ff7a9eb8aea62d1cd4f806571829782a58d9fc03c69bc5ba47ce2d1097c2287" exitCode=0 Feb 19 06:09:55 crc kubenswrapper[5012]: I0219 06:09:55.972664 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a64b2810-4982-43ef-ae9f-1e7852394d60","Type":"ContainerDied","Data":"9ff7a9eb8aea62d1cd4f806571829782a58d9fc03c69bc5ba47ce2d1097c2287"} Feb 19 06:09:56 crc kubenswrapper[5012]: I0219 06:09:56.989270 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a64b2810-4982-43ef-ae9f-1e7852394d60","Type":"ContainerStarted","Data":"fa47c8cd3d12d9ab3e651390eb20631a26da141672dc66baca2e8273fbec7049"} Feb 19 06:09:59 crc kubenswrapper[5012]: I0219 06:09:59.074965 
5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:59 crc kubenswrapper[5012]: I0219 06:09:59.152187 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:09:59 crc kubenswrapper[5012]: I0219 06:09:59.322350 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5nvpj"] Feb 19 06:10:00 crc kubenswrapper[5012]: I0219 06:10:00.027236 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a64b2810-4982-43ef-ae9f-1e7852394d60","Type":"ContainerStarted","Data":"b991f0143fb8025e1241a5d23e1357686a0bd21e0ac51c7607aae018d4b1a95c"} Feb 19 06:10:00 crc kubenswrapper[5012]: I0219 06:10:00.027278 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"a64b2810-4982-43ef-ae9f-1e7852394d60","Type":"ContainerStarted","Data":"1bb48c199630c15cb81e441508a5db5b10e38e9d52685d4e60840e94f989da67"} Feb 19 06:10:00 crc kubenswrapper[5012]: I0219 06:10:00.057363 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.057344417 podStartE2EDuration="20.057344417s" podCreationTimestamp="2026-02-19 06:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 06:10:00.054428926 +0000 UTC m=+2696.087751495" watchObservedRunningTime="2026-02-19 06:10:00.057344417 +0000 UTC m=+2696.090666986" Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.035327 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5nvpj" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" containerName="registry-server" 
containerID="cri-o://a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38" gracePeriod=2 Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.217576 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.598869 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.770673 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-catalog-content\") pod \"594190bd-cf27-4446-b5b9-7fb84361c200\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.770961 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp8np\" (UniqueName: \"kubernetes.io/projected/594190bd-cf27-4446-b5b9-7fb84361c200-kube-api-access-sp8np\") pod \"594190bd-cf27-4446-b5b9-7fb84361c200\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.772084 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-utilities\") pod \"594190bd-cf27-4446-b5b9-7fb84361c200\" (UID: \"594190bd-cf27-4446-b5b9-7fb84361c200\") " Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.772944 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-utilities" (OuterVolumeSpecName: "utilities") pod "594190bd-cf27-4446-b5b9-7fb84361c200" (UID: "594190bd-cf27-4446-b5b9-7fb84361c200"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.777387 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/594190bd-cf27-4446-b5b9-7fb84361c200-kube-api-access-sp8np" (OuterVolumeSpecName: "kube-api-access-sp8np") pod "594190bd-cf27-4446-b5b9-7fb84361c200" (UID: "594190bd-cf27-4446-b5b9-7fb84361c200"). InnerVolumeSpecName "kube-api-access-sp8np". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.875077 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.875117 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp8np\" (UniqueName: \"kubernetes.io/projected/594190bd-cf27-4446-b5b9-7fb84361c200-kube-api-access-sp8np\") on node \"crc\" DevicePath \"\"" Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.880984 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "594190bd-cf27-4446-b5b9-7fb84361c200" (UID: "594190bd-cf27-4446-b5b9-7fb84361c200"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:10:01 crc kubenswrapper[5012]: I0219 06:10:01.977046 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/594190bd-cf27-4446-b5b9-7fb84361c200-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.051136 5012 generic.go:334] "Generic (PLEG): container finished" podID="594190bd-cf27-4446-b5b9-7fb84361c200" containerID="a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38" exitCode=0 Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.051236 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5nvpj" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.051348 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nvpj" event={"ID":"594190bd-cf27-4446-b5b9-7fb84361c200","Type":"ContainerDied","Data":"a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38"} Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.051392 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5nvpj" event={"ID":"594190bd-cf27-4446-b5b9-7fb84361c200","Type":"ContainerDied","Data":"535e54cd69617ddc350f2f90e81fe2153053b3c460de6253cf093e44a7c59c54"} Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.051421 5012 scope.go:117] "RemoveContainer" containerID="a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.096805 5012 scope.go:117] "RemoveContainer" containerID="00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.114239 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5nvpj"] Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 
06:10:02.128976 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5nvpj"] Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.147962 5012 scope.go:117] "RemoveContainer" containerID="219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.194698 5012 scope.go:117] "RemoveContainer" containerID="a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38" Feb 19 06:10:02 crc kubenswrapper[5012]: E0219 06:10:02.195477 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38\": container with ID starting with a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38 not found: ID does not exist" containerID="a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.195550 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38"} err="failed to get container status \"a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38\": rpc error: code = NotFound desc = could not find container \"a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38\": container with ID starting with a05144dcceeec3c80b85fdaa22f1014ce4d9da4a516aa9c3db23b35303cc5d38 not found: ID does not exist" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.195604 5012 scope.go:117] "RemoveContainer" containerID="00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927" Feb 19 06:10:02 crc kubenswrapper[5012]: E0219 06:10:02.196165 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927\": container with ID 
starting with 00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927 not found: ID does not exist" containerID="00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.196206 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927"} err="failed to get container status \"00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927\": rpc error: code = NotFound desc = could not find container \"00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927\": container with ID starting with 00feb05f6461d629e38d1999c34f42d6e03d8d047468f6c092638259fd4b2927 not found: ID does not exist" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.196233 5012 scope.go:117] "RemoveContainer" containerID="219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89" Feb 19 06:10:02 crc kubenswrapper[5012]: E0219 06:10:02.196722 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89\": container with ID starting with 219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89 not found: ID does not exist" containerID="219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.196770 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89"} err="failed to get container status \"219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89\": rpc error: code = NotFound desc = could not find container \"219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89\": container with ID starting with 219553efe7a4db6f45dcbb489ef756eece2c6f71f42e1f159b9624665806ec89 not found: 
ID does not exist" Feb 19 06:10:02 crc kubenswrapper[5012]: I0219 06:10:02.723711 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" path="/var/lib/kubelet/pods/594190bd-cf27-4446-b5b9-7fb84361c200/volumes" Feb 19 06:10:11 crc kubenswrapper[5012]: I0219 06:10:11.217197 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 19 06:10:11 crc kubenswrapper[5012]: I0219 06:10:11.227056 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 19 06:10:12 crc kubenswrapper[5012]: I0219 06:10:12.194699 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.783622 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 06:10:30 crc kubenswrapper[5012]: E0219 06:10:30.786774 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" containerName="extract-utilities" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.786904 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" containerName="extract-utilities" Feb 19 06:10:30 crc kubenswrapper[5012]: E0219 06:10:30.787013 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" containerName="extract-content" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.787085 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" containerName="extract-content" Feb 19 06:10:30 crc kubenswrapper[5012]: E0219 06:10:30.787169 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" containerName="registry-server" Feb 19 06:10:30 crc 
kubenswrapper[5012]: I0219 06:10:30.787253 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" containerName="registry-server" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.787657 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="594190bd-cf27-4446-b5b9-7fb84361c200" containerName="registry-server" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.788723 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.792339 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-s2ths" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.792448 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.792519 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.792641 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.798647 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.936079 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.936744 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.936841 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8klk\" (UniqueName: \"kubernetes.io/projected/54eccb09-b3ec-45bc-8065-4c5eb9516257-kube-api-access-b8klk\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.937047 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.937112 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.937488 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.937531 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-config-data\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.937574 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:30 crc kubenswrapper[5012]: I0219 06:10:30.937609 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.039005 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.039058 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8klk\" (UniqueName: \"kubernetes.io/projected/54eccb09-b3ec-45bc-8065-4c5eb9516257-kube-api-access-b8klk\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.039493 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" 
(UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.039540 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.040516 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.040553 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-config-data\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.040587 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.040575 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-workdir\") pod 
\"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.040611 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.040727 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.041517 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.041857 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.042584 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") device mount path 
\"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.044748 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-config-data\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.046035 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.050515 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.052699 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.058725 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8klk\" (UniqueName: \"kubernetes.io/projected/54eccb09-b3ec-45bc-8065-4c5eb9516257-kube-api-access-b8klk\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.080190 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.131761 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 06:10:31 crc kubenswrapper[5012]: I0219 06:10:31.691875 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 06:10:32 crc kubenswrapper[5012]: I0219 06:10:32.432059 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"54eccb09-b3ec-45bc-8065-4c5eb9516257","Type":"ContainerStarted","Data":"4d40402dd6566caf396779f17c2dbfad2df685b1f64caf3b6b294fc60c0aaaea"} Feb 19 06:10:47 crc kubenswrapper[5012]: I0219 06:10:47.593362 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"54eccb09-b3ec-45bc-8065-4c5eb9516257","Type":"ContainerStarted","Data":"45a71cb7a299afd86b43701046f8b7c089e907df4ed4d824464d2883ac4074ea"} Feb 19 06:10:47 crc kubenswrapper[5012]: I0219 06:10:47.620469 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.7256981700000003 podStartE2EDuration="18.620445688s" podCreationTimestamp="2026-02-19 06:10:29 +0000 UTC" firstStartedPulling="2026-02-19 06:10:31.704318669 +0000 UTC m=+2727.737641248" lastFinishedPulling="2026-02-19 06:10:46.599066157 +0000 UTC m=+2742.632388766" observedRunningTime="2026-02-19 06:10:47.613742534 +0000 UTC m=+2743.647065133" watchObservedRunningTime="2026-02-19 06:10:47.620445688 +0000 UTC m=+2743.653768287" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.663073 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mkgb5"] Feb 
19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.665651 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.677777 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-utilities\") pod \"certified-operators-mkgb5\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.677959 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f5mm\" (UniqueName: \"kubernetes.io/projected/29d004a3-f380-4697-98f8-55fcb4d82038-kube-api-access-9f5mm\") pod \"certified-operators-mkgb5\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.678023 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-catalog-content\") pod \"certified-operators-mkgb5\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.695706 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mkgb5"] Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.780344 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f5mm\" (UniqueName: \"kubernetes.io/projected/29d004a3-f380-4697-98f8-55fcb4d82038-kube-api-access-9f5mm\") pod \"certified-operators-mkgb5\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " 
pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.780436 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-catalog-content\") pod \"certified-operators-mkgb5\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.780538 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-utilities\") pod \"certified-operators-mkgb5\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.781094 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-utilities\") pod \"certified-operators-mkgb5\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.781174 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-catalog-content\") pod \"certified-operators-mkgb5\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.800693 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f5mm\" (UniqueName: \"kubernetes.io/projected/29d004a3-f380-4697-98f8-55fcb4d82038-kube-api-access-9f5mm\") pod \"certified-operators-mkgb5\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " 
pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:04 crc kubenswrapper[5012]: I0219 06:12:04.988854 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:05 crc kubenswrapper[5012]: I0219 06:12:05.517236 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mkgb5"] Feb 19 06:12:06 crc kubenswrapper[5012]: I0219 06:12:06.491724 5012 generic.go:334] "Generic (PLEG): container finished" podID="29d004a3-f380-4697-98f8-55fcb4d82038" containerID="dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad" exitCode=0 Feb 19 06:12:06 crc kubenswrapper[5012]: I0219 06:12:06.491782 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkgb5" event={"ID":"29d004a3-f380-4697-98f8-55fcb4d82038","Type":"ContainerDied","Data":"dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad"} Feb 19 06:12:06 crc kubenswrapper[5012]: I0219 06:12:06.491839 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkgb5" event={"ID":"29d004a3-f380-4697-98f8-55fcb4d82038","Type":"ContainerStarted","Data":"9a8603d7a2167d5833dce35420e4665df5733ee3cbe5e1cf78479691ea6b660f"} Feb 19 06:12:06 crc kubenswrapper[5012]: I0219 06:12:06.494729 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 06:12:08 crc kubenswrapper[5012]: I0219 06:12:08.513373 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkgb5" event={"ID":"29d004a3-f380-4697-98f8-55fcb4d82038","Type":"ContainerStarted","Data":"c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d"} Feb 19 06:12:10 crc kubenswrapper[5012]: I0219 06:12:10.536901 5012 generic.go:334] "Generic (PLEG): container finished" podID="29d004a3-f380-4697-98f8-55fcb4d82038" 
containerID="c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d" exitCode=0 Feb 19 06:12:10 crc kubenswrapper[5012]: I0219 06:12:10.537002 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkgb5" event={"ID":"29d004a3-f380-4697-98f8-55fcb4d82038","Type":"ContainerDied","Data":"c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d"} Feb 19 06:12:11 crc kubenswrapper[5012]: I0219 06:12:11.551693 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkgb5" event={"ID":"29d004a3-f380-4697-98f8-55fcb4d82038","Type":"ContainerStarted","Data":"d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988"} Feb 19 06:12:11 crc kubenswrapper[5012]: I0219 06:12:11.574274 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mkgb5" podStartSLOduration=3.126061717 podStartE2EDuration="7.574260641s" podCreationTimestamp="2026-02-19 06:12:04 +0000 UTC" firstStartedPulling="2026-02-19 06:12:06.494206415 +0000 UTC m=+2822.527528994" lastFinishedPulling="2026-02-19 06:12:10.942405309 +0000 UTC m=+2826.975727918" observedRunningTime="2026-02-19 06:12:11.573965254 +0000 UTC m=+2827.607287863" watchObservedRunningTime="2026-02-19 06:12:11.574260641 +0000 UTC m=+2827.607583210" Feb 19 06:12:14 crc kubenswrapper[5012]: I0219 06:12:14.431043 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:12:14 crc kubenswrapper[5012]: I0219 06:12:14.431130 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:12:14 crc kubenswrapper[5012]: I0219 06:12:14.989213 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:14 crc kubenswrapper[5012]: I0219 06:12:14.989531 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:15 crc kubenswrapper[5012]: I0219 06:12:15.061793 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:25 crc kubenswrapper[5012]: I0219 06:12:25.063670 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:25 crc kubenswrapper[5012]: I0219 06:12:25.123497 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mkgb5"] Feb 19 06:12:25 crc kubenswrapper[5012]: I0219 06:12:25.725496 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mkgb5" podUID="29d004a3-f380-4697-98f8-55fcb4d82038" containerName="registry-server" containerID="cri-o://d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988" gracePeriod=2 Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.215248 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.370998 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f5mm\" (UniqueName: \"kubernetes.io/projected/29d004a3-f380-4697-98f8-55fcb4d82038-kube-api-access-9f5mm\") pod \"29d004a3-f380-4697-98f8-55fcb4d82038\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.371201 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-utilities\") pod \"29d004a3-f380-4697-98f8-55fcb4d82038\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.371345 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-catalog-content\") pod \"29d004a3-f380-4697-98f8-55fcb4d82038\" (UID: \"29d004a3-f380-4697-98f8-55fcb4d82038\") " Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.372185 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-utilities" (OuterVolumeSpecName: "utilities") pod "29d004a3-f380-4697-98f8-55fcb4d82038" (UID: "29d004a3-f380-4697-98f8-55fcb4d82038"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.385162 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d004a3-f380-4697-98f8-55fcb4d82038-kube-api-access-9f5mm" (OuterVolumeSpecName: "kube-api-access-9f5mm") pod "29d004a3-f380-4697-98f8-55fcb4d82038" (UID: "29d004a3-f380-4697-98f8-55fcb4d82038"). InnerVolumeSpecName "kube-api-access-9f5mm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.431199 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29d004a3-f380-4697-98f8-55fcb4d82038" (UID: "29d004a3-f380-4697-98f8-55fcb4d82038"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.473843 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.473875 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f5mm\" (UniqueName: \"kubernetes.io/projected/29d004a3-f380-4697-98f8-55fcb4d82038-kube-api-access-9f5mm\") on node \"crc\" DevicePath \"\"" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.473890 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d004a3-f380-4697-98f8-55fcb4d82038-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.740559 5012 generic.go:334] "Generic (PLEG): container finished" podID="29d004a3-f380-4697-98f8-55fcb4d82038" containerID="d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988" exitCode=0 Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.740780 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkgb5" event={"ID":"29d004a3-f380-4697-98f8-55fcb4d82038","Type":"ContainerDied","Data":"d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988"} Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.740908 5012 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-mkgb5" event={"ID":"29d004a3-f380-4697-98f8-55fcb4d82038","Type":"ContainerDied","Data":"9a8603d7a2167d5833dce35420e4665df5733ee3cbe5e1cf78479691ea6b660f"} Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.740934 5012 scope.go:117] "RemoveContainer" containerID="d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.740958 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkgb5" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.778449 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mkgb5"] Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.783897 5012 scope.go:117] "RemoveContainer" containerID="c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.788354 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mkgb5"] Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.819086 5012 scope.go:117] "RemoveContainer" containerID="dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.866226 5012 scope.go:117] "RemoveContainer" containerID="d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988" Feb 19 06:12:26 crc kubenswrapper[5012]: E0219 06:12:26.866697 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988\": container with ID starting with d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988 not found: ID does not exist" containerID="d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 
06:12:26.866777 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988"} err="failed to get container status \"d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988\": rpc error: code = NotFound desc = could not find container \"d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988\": container with ID starting with d00c8348f823de45adf3593e6eedaf637d157170d98b5fba91d3b412c478b988 not found: ID does not exist" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.866809 5012 scope.go:117] "RemoveContainer" containerID="c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d" Feb 19 06:12:26 crc kubenswrapper[5012]: E0219 06:12:26.867324 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d\": container with ID starting with c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d not found: ID does not exist" containerID="c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.867354 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d"} err="failed to get container status \"c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d\": rpc error: code = NotFound desc = could not find container \"c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d\": container with ID starting with c4378893cae5d8d59bc8a6b012da78a7283c5b06c70d7d2514330d4553d90e0d not found: ID does not exist" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.867376 5012 scope.go:117] "RemoveContainer" containerID="dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad" Feb 19 06:12:26 crc 
kubenswrapper[5012]: E0219 06:12:26.867654 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad\": container with ID starting with dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad not found: ID does not exist" containerID="dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad" Feb 19 06:12:26 crc kubenswrapper[5012]: I0219 06:12:26.867689 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad"} err="failed to get container status \"dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad\": rpc error: code = NotFound desc = could not find container \"dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad\": container with ID starting with dd38b8b4a335a9efd860b26d8a6c14c6da6526611fc91aff01c4c6b1c10d17ad not found: ID does not exist" Feb 19 06:12:28 crc kubenswrapper[5012]: I0219 06:12:28.722624 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d004a3-f380-4697-98f8-55fcb4d82038" path="/var/lib/kubelet/pods/29d004a3-f380-4697-98f8-55fcb4d82038/volumes" Feb 19 06:12:44 crc kubenswrapper[5012]: I0219 06:12:44.431238 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:12:44 crc kubenswrapper[5012]: I0219 06:12:44.431896 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 19 06:13:14 crc kubenswrapper[5012]: I0219 06:13:14.430951 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:13:14 crc kubenswrapper[5012]: I0219 06:13:14.431620 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:13:14 crc kubenswrapper[5012]: I0219 06:13:14.431683 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:13:14 crc kubenswrapper[5012]: I0219 06:13:14.432458 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:13:14 crc kubenswrapper[5012]: I0219 06:13:14.432557 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" gracePeriod=600 Feb 19 06:13:14 crc kubenswrapper[5012]: E0219 06:13:14.557727 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:13:15 crc kubenswrapper[5012]: I0219 06:13:15.285909 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" exitCode=0 Feb 19 06:13:15 crc kubenswrapper[5012]: I0219 06:13:15.285987 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc"} Feb 19 06:13:15 crc kubenswrapper[5012]: I0219 06:13:15.286037 5012 scope.go:117] "RemoveContainer" containerID="8f0e2de409f869f343439fd788a0683b28b6e560ce8f601661640064fc2c4afc" Feb 19 06:13:15 crc kubenswrapper[5012]: I0219 06:13:15.288908 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:13:15 crc kubenswrapper[5012]: E0219 06:13:15.289548 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:13:29 crc kubenswrapper[5012]: I0219 06:13:29.703166 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:13:29 crc kubenswrapper[5012]: 
E0219 06:13:29.704209 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:13:41 crc kubenswrapper[5012]: I0219 06:13:41.703422 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:13:41 crc kubenswrapper[5012]: E0219 06:13:41.704945 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:13:52 crc kubenswrapper[5012]: I0219 06:13:52.704202 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:13:52 crc kubenswrapper[5012]: E0219 06:13:52.705368 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:14:05 crc kubenswrapper[5012]: I0219 06:14:05.703168 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:14:05 crc 
kubenswrapper[5012]: E0219 06:14:05.704276 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.376139 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bzsc4"] Feb 19 06:14:15 crc kubenswrapper[5012]: E0219 06:14:15.377291 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d004a3-f380-4697-98f8-55fcb4d82038" containerName="extract-content" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.377330 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d004a3-f380-4697-98f8-55fcb4d82038" containerName="extract-content" Feb 19 06:14:15 crc kubenswrapper[5012]: E0219 06:14:15.377357 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d004a3-f380-4697-98f8-55fcb4d82038" containerName="extract-utilities" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.377365 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d004a3-f380-4697-98f8-55fcb4d82038" containerName="extract-utilities" Feb 19 06:14:15 crc kubenswrapper[5012]: E0219 06:14:15.377389 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d004a3-f380-4697-98f8-55fcb4d82038" containerName="registry-server" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.377399 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d004a3-f380-4697-98f8-55fcb4d82038" containerName="registry-server" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.377624 5012 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="29d004a3-f380-4697-98f8-55fcb4d82038" containerName="registry-server" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.379649 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.407763 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzsc4"] Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.495716 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2xdh\" (UniqueName: \"kubernetes.io/projected/ac664a0d-6329-4f30-b172-8251efffc897-kube-api-access-p2xdh\") pod \"redhat-marketplace-bzsc4\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.495862 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-utilities\") pod \"redhat-marketplace-bzsc4\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.495957 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-catalog-content\") pod \"redhat-marketplace-bzsc4\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.597911 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2xdh\" (UniqueName: \"kubernetes.io/projected/ac664a0d-6329-4f30-b172-8251efffc897-kube-api-access-p2xdh\") pod \"redhat-marketplace-bzsc4\" (UID: 
\"ac664a0d-6329-4f30-b172-8251efffc897\") " pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.598483 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-utilities\") pod \"redhat-marketplace-bzsc4\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.599075 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-utilities\") pod \"redhat-marketplace-bzsc4\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.599262 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-catalog-content\") pod \"redhat-marketplace-bzsc4\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.599664 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-catalog-content\") pod \"redhat-marketplace-bzsc4\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.619624 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2xdh\" (UniqueName: \"kubernetes.io/projected/ac664a0d-6329-4f30-b172-8251efffc897-kube-api-access-p2xdh\") pod \"redhat-marketplace-bzsc4\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " 
pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:15 crc kubenswrapper[5012]: I0219 06:14:15.748784 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:16 crc kubenswrapper[5012]: I0219 06:14:16.220794 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzsc4"] Feb 19 06:14:17 crc kubenswrapper[5012]: I0219 06:14:17.094626 5012 generic.go:334] "Generic (PLEG): container finished" podID="ac664a0d-6329-4f30-b172-8251efffc897" containerID="fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a" exitCode=0 Feb 19 06:14:17 crc kubenswrapper[5012]: I0219 06:14:17.094735 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzsc4" event={"ID":"ac664a0d-6329-4f30-b172-8251efffc897","Type":"ContainerDied","Data":"fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a"} Feb 19 06:14:17 crc kubenswrapper[5012]: I0219 06:14:17.094994 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzsc4" event={"ID":"ac664a0d-6329-4f30-b172-8251efffc897","Type":"ContainerStarted","Data":"d4c09887c9bcdda41a18bc0f97b29786c4c22d3d1291fc9396998df058674684"} Feb 19 06:14:17 crc kubenswrapper[5012]: I0219 06:14:17.703442 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:14:17 crc kubenswrapper[5012]: E0219 06:14:17.704274 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 
06:14:18 crc kubenswrapper[5012]: I0219 06:14:18.125617 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzsc4" event={"ID":"ac664a0d-6329-4f30-b172-8251efffc897","Type":"ContainerStarted","Data":"ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a"} Feb 19 06:14:19 crc kubenswrapper[5012]: I0219 06:14:19.142488 5012 generic.go:334] "Generic (PLEG): container finished" podID="ac664a0d-6329-4f30-b172-8251efffc897" containerID="ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a" exitCode=0 Feb 19 06:14:19 crc kubenswrapper[5012]: I0219 06:14:19.142745 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzsc4" event={"ID":"ac664a0d-6329-4f30-b172-8251efffc897","Type":"ContainerDied","Data":"ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a"} Feb 19 06:14:20 crc kubenswrapper[5012]: I0219 06:14:20.156273 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzsc4" event={"ID":"ac664a0d-6329-4f30-b172-8251efffc897","Type":"ContainerStarted","Data":"39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6"} Feb 19 06:14:20 crc kubenswrapper[5012]: I0219 06:14:20.181032 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bzsc4" podStartSLOduration=2.717482419 podStartE2EDuration="5.181004195s" podCreationTimestamp="2026-02-19 06:14:15 +0000 UTC" firstStartedPulling="2026-02-19 06:14:17.097744764 +0000 UTC m=+2953.131067373" lastFinishedPulling="2026-02-19 06:14:19.56126654 +0000 UTC m=+2955.594589149" observedRunningTime="2026-02-19 06:14:20.180631686 +0000 UTC m=+2956.213954265" watchObservedRunningTime="2026-02-19 06:14:20.181004195 +0000 UTC m=+2956.214326804" Feb 19 06:14:25 crc kubenswrapper[5012]: I0219 06:14:25.749370 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:25 crc kubenswrapper[5012]: I0219 06:14:25.750029 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:25 crc kubenswrapper[5012]: I0219 06:14:25.832703 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:26 crc kubenswrapper[5012]: I0219 06:14:26.306511 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:26 crc kubenswrapper[5012]: I0219 06:14:26.379677 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzsc4"] Feb 19 06:14:28 crc kubenswrapper[5012]: I0219 06:14:28.250195 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bzsc4" podUID="ac664a0d-6329-4f30-b172-8251efffc897" containerName="registry-server" containerID="cri-o://39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6" gracePeriod=2 Feb 19 06:14:28 crc kubenswrapper[5012]: I0219 06:14:28.900676 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.058339 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-utilities\") pod \"ac664a0d-6329-4f30-b172-8251efffc897\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.058460 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-catalog-content\") pod \"ac664a0d-6329-4f30-b172-8251efffc897\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.058493 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2xdh\" (UniqueName: \"kubernetes.io/projected/ac664a0d-6329-4f30-b172-8251efffc897-kube-api-access-p2xdh\") pod \"ac664a0d-6329-4f30-b172-8251efffc897\" (UID: \"ac664a0d-6329-4f30-b172-8251efffc897\") " Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.060201 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-utilities" (OuterVolumeSpecName: "utilities") pod "ac664a0d-6329-4f30-b172-8251efffc897" (UID: "ac664a0d-6329-4f30-b172-8251efffc897"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.067813 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac664a0d-6329-4f30-b172-8251efffc897-kube-api-access-p2xdh" (OuterVolumeSpecName: "kube-api-access-p2xdh") pod "ac664a0d-6329-4f30-b172-8251efffc897" (UID: "ac664a0d-6329-4f30-b172-8251efffc897"). InnerVolumeSpecName "kube-api-access-p2xdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.086221 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac664a0d-6329-4f30-b172-8251efffc897" (UID: "ac664a0d-6329-4f30-b172-8251efffc897"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.160409 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.160441 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac664a0d-6329-4f30-b172-8251efffc897-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.160451 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2xdh\" (UniqueName: \"kubernetes.io/projected/ac664a0d-6329-4f30-b172-8251efffc897-kube-api-access-p2xdh\") on node \"crc\" DevicePath \"\"" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.268822 5012 generic.go:334] "Generic (PLEG): container finished" podID="ac664a0d-6329-4f30-b172-8251efffc897" containerID="39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6" exitCode=0 Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.268871 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bzsc4" event={"ID":"ac664a0d-6329-4f30-b172-8251efffc897","Type":"ContainerDied","Data":"39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6"} Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.268955 5012 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-bzsc4" event={"ID":"ac664a0d-6329-4f30-b172-8251efffc897","Type":"ContainerDied","Data":"d4c09887c9bcdda41a18bc0f97b29786c4c22d3d1291fc9396998df058674684"} Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.268994 5012 scope.go:117] "RemoveContainer" containerID="39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.271265 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bzsc4" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.300679 5012 scope.go:117] "RemoveContainer" containerID="ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.339076 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzsc4"] Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.353485 5012 scope.go:117] "RemoveContainer" containerID="fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.354825 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bzsc4"] Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.403735 5012 scope.go:117] "RemoveContainer" containerID="39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6" Feb 19 06:14:29 crc kubenswrapper[5012]: E0219 06:14:29.404332 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6\": container with ID starting with 39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6 not found: ID does not exist" containerID="39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.404393 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6"} err="failed to get container status \"39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6\": rpc error: code = NotFound desc = could not find container \"39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6\": container with ID starting with 39026c26b91b0ef0d146aea96cb19bd60bcb5ecaaf16c6576484041d68b647c6 not found: ID does not exist" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.404429 5012 scope.go:117] "RemoveContainer" containerID="ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a" Feb 19 06:14:29 crc kubenswrapper[5012]: E0219 06:14:29.404870 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a\": container with ID starting with ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a not found: ID does not exist" containerID="ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.404924 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a"} err="failed to get container status \"ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a\": rpc error: code = NotFound desc = could not find container \"ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a\": container with ID starting with ce73e39fa7297a47ad88525240bf4fa14897c6aa3246ff2d79e859ced449da6a not found: ID does not exist" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.404965 5012 scope.go:117] "RemoveContainer" containerID="fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a" Feb 19 06:14:29 crc kubenswrapper[5012]: E0219 
06:14:29.405285 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a\": container with ID starting with fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a not found: ID does not exist" containerID="fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.405357 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a"} err="failed to get container status \"fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a\": rpc error: code = NotFound desc = could not find container \"fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a\": container with ID starting with fb8801b3ab01e846c2014bda353ce70e6e8886517f82acad95c7237fb1ba574a not found: ID does not exist" Feb 19 06:14:29 crc kubenswrapper[5012]: I0219 06:14:29.703087 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:14:29 crc kubenswrapper[5012]: E0219 06:14:29.703973 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:14:30 crc kubenswrapper[5012]: I0219 06:14:30.723189 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac664a0d-6329-4f30-b172-8251efffc897" path="/var/lib/kubelet/pods/ac664a0d-6329-4f30-b172-8251efffc897/volumes" Feb 19 06:14:44 crc kubenswrapper[5012]: I0219 06:14:44.715120 
5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:14:44 crc kubenswrapper[5012]: E0219 06:14:44.716192 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:14:57 crc kubenswrapper[5012]: I0219 06:14:57.703581 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:14:57 crc kubenswrapper[5012]: E0219 06:14:57.704770 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.161517 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt"] Feb 19 06:15:00 crc kubenswrapper[5012]: E0219 06:15:00.162792 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac664a0d-6329-4f30-b172-8251efffc897" containerName="registry-server" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.162818 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac664a0d-6329-4f30-b172-8251efffc897" containerName="registry-server" Feb 19 06:15:00 crc kubenswrapper[5012]: E0219 06:15:00.162856 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ac664a0d-6329-4f30-b172-8251efffc897" containerName="extract-content" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.162865 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac664a0d-6329-4f30-b172-8251efffc897" containerName="extract-content" Feb 19 06:15:00 crc kubenswrapper[5012]: E0219 06:15:00.162890 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac664a0d-6329-4f30-b172-8251efffc897" containerName="extract-utilities" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.162900 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac664a0d-6329-4f30-b172-8251efffc897" containerName="extract-utilities" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.163247 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac664a0d-6329-4f30-b172-8251efffc897" containerName="registry-server" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.164490 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.167359 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.167937 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.191963 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt"] Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.277488 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d1557b7-91d6-4aac-8306-59d97142a76c-secret-volume\") pod \"collect-profiles-29524695-rb8tt\" 
(UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.277728 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d1557b7-91d6-4aac-8306-59d97142a76c-config-volume\") pod \"collect-profiles-29524695-rb8tt\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.278038 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tnwc\" (UniqueName: \"kubernetes.io/projected/0d1557b7-91d6-4aac-8306-59d97142a76c-kube-api-access-4tnwc\") pod \"collect-profiles-29524695-rb8tt\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.379846 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d1557b7-91d6-4aac-8306-59d97142a76c-secret-volume\") pod \"collect-profiles-29524695-rb8tt\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.379973 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d1557b7-91d6-4aac-8306-59d97142a76c-config-volume\") pod \"collect-profiles-29524695-rb8tt\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.380034 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4tnwc\" (UniqueName: \"kubernetes.io/projected/0d1557b7-91d6-4aac-8306-59d97142a76c-kube-api-access-4tnwc\") pod \"collect-profiles-29524695-rb8tt\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.380859 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d1557b7-91d6-4aac-8306-59d97142a76c-config-volume\") pod \"collect-profiles-29524695-rb8tt\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.387760 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d1557b7-91d6-4aac-8306-59d97142a76c-secret-volume\") pod \"collect-profiles-29524695-rb8tt\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.412968 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tnwc\" (UniqueName: \"kubernetes.io/projected/0d1557b7-91d6-4aac-8306-59d97142a76c-kube-api-access-4tnwc\") pod \"collect-profiles-29524695-rb8tt\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:00 crc kubenswrapper[5012]: I0219 06:15:00.497537 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:01 crc kubenswrapper[5012]: I0219 06:15:01.023250 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt"] Feb 19 06:15:01 crc kubenswrapper[5012]: I0219 06:15:01.665796 5012 generic.go:334] "Generic (PLEG): container finished" podID="0d1557b7-91d6-4aac-8306-59d97142a76c" containerID="1a2cb819f1490aeaeb6e29cd5e196789ce8e9978f4d9987b6edfc7cea46ee158" exitCode=0 Feb 19 06:15:01 crc kubenswrapper[5012]: I0219 06:15:01.665915 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" event={"ID":"0d1557b7-91d6-4aac-8306-59d97142a76c","Type":"ContainerDied","Data":"1a2cb819f1490aeaeb6e29cd5e196789ce8e9978f4d9987b6edfc7cea46ee158"} Feb 19 06:15:01 crc kubenswrapper[5012]: I0219 06:15:01.666129 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" event={"ID":"0d1557b7-91d6-4aac-8306-59d97142a76c","Type":"ContainerStarted","Data":"20d1ff5b35a75cf787a735e30d46894c93d5313a7ebc1210e95fe7fe46b5d694"} Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.129916 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.243951 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d1557b7-91d6-4aac-8306-59d97142a76c-secret-volume\") pod \"0d1557b7-91d6-4aac-8306-59d97142a76c\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.244084 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tnwc\" (UniqueName: \"kubernetes.io/projected/0d1557b7-91d6-4aac-8306-59d97142a76c-kube-api-access-4tnwc\") pod \"0d1557b7-91d6-4aac-8306-59d97142a76c\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.244176 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d1557b7-91d6-4aac-8306-59d97142a76c-config-volume\") pod \"0d1557b7-91d6-4aac-8306-59d97142a76c\" (UID: \"0d1557b7-91d6-4aac-8306-59d97142a76c\") " Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.245143 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d1557b7-91d6-4aac-8306-59d97142a76c-config-volume" (OuterVolumeSpecName: "config-volume") pod "0d1557b7-91d6-4aac-8306-59d97142a76c" (UID: "0d1557b7-91d6-4aac-8306-59d97142a76c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.249968 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d1557b7-91d6-4aac-8306-59d97142a76c-kube-api-access-4tnwc" (OuterVolumeSpecName: "kube-api-access-4tnwc") pod "0d1557b7-91d6-4aac-8306-59d97142a76c" (UID: "0d1557b7-91d6-4aac-8306-59d97142a76c"). 
InnerVolumeSpecName "kube-api-access-4tnwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.251332 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d1557b7-91d6-4aac-8306-59d97142a76c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0d1557b7-91d6-4aac-8306-59d97142a76c" (UID: "0d1557b7-91d6-4aac-8306-59d97142a76c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.346394 5012 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0d1557b7-91d6-4aac-8306-59d97142a76c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.346423 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tnwc\" (UniqueName: \"kubernetes.io/projected/0d1557b7-91d6-4aac-8306-59d97142a76c-kube-api-access-4tnwc\") on node \"crc\" DevicePath \"\"" Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.346432 5012 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d1557b7-91d6-4aac-8306-59d97142a76c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.701795 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" event={"ID":"0d1557b7-91d6-4aac-8306-59d97142a76c","Type":"ContainerDied","Data":"20d1ff5b35a75cf787a735e30d46894c93d5313a7ebc1210e95fe7fe46b5d694"} Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.702152 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20d1ff5b35a75cf787a735e30d46894c93d5313a7ebc1210e95fe7fe46b5d694" Feb 19 06:15:03 crc kubenswrapper[5012]: I0219 06:15:03.701833 5012 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt" Feb 19 06:15:04 crc kubenswrapper[5012]: I0219 06:15:04.229920 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r"] Feb 19 06:15:04 crc kubenswrapper[5012]: I0219 06:15:04.238997 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524650-khs5r"] Feb 19 06:15:04 crc kubenswrapper[5012]: I0219 06:15:04.741287 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff63f713-7649-46d8-85cb-ef67dccf9fe6" path="/var/lib/kubelet/pods/ff63f713-7649-46d8-85cb-ef67dccf9fe6/volumes" Feb 19 06:15:10 crc kubenswrapper[5012]: I0219 06:15:10.704071 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:15:10 crc kubenswrapper[5012]: E0219 06:15:10.705480 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:15:25 crc kubenswrapper[5012]: I0219 06:15:25.703603 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:15:25 crc kubenswrapper[5012]: E0219 06:15:25.704604 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:15:37 crc kubenswrapper[5012]: I0219 06:15:37.702708 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:15:37 crc kubenswrapper[5012]: E0219 06:15:37.705071 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:15:38 crc kubenswrapper[5012]: I0219 06:15:38.416849 5012 scope.go:117] "RemoveContainer" containerID="c5d7329af46ea59d345e496a5c84f8c51fab010adcb4a91e0080f58a2ca4a9ec" Feb 19 06:15:51 crc kubenswrapper[5012]: I0219 06:15:51.703386 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:15:51 crc kubenswrapper[5012]: E0219 06:15:51.704427 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:16:04 crc kubenswrapper[5012]: I0219 06:16:04.717412 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:16:04 crc kubenswrapper[5012]: E0219 06:16:04.718973 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:16:19 crc kubenswrapper[5012]: I0219 06:16:19.702912 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:16:19 crc kubenswrapper[5012]: E0219 06:16:19.704146 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:16:32 crc kubenswrapper[5012]: I0219 06:16:32.704134 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:16:32 crc kubenswrapper[5012]: E0219 06:16:32.705212 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:16:44 crc kubenswrapper[5012]: I0219 06:16:44.711628 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:16:44 crc kubenswrapper[5012]: E0219 06:16:44.712732 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:16:56 crc kubenswrapper[5012]: I0219 06:16:56.702825 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:16:56 crc kubenswrapper[5012]: E0219 06:16:56.704000 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:17:07 crc kubenswrapper[5012]: I0219 06:17:07.703860 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:17:07 crc kubenswrapper[5012]: E0219 06:17:07.704546 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:17:20 crc kubenswrapper[5012]: I0219 06:17:20.703588 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:17:20 crc kubenswrapper[5012]: E0219 06:17:20.704956 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.374778 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cgg6f"] Feb 19 06:17:29 crc kubenswrapper[5012]: E0219 06:17:29.375817 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d1557b7-91d6-4aac-8306-59d97142a76c" containerName="collect-profiles" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.375832 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d1557b7-91d6-4aac-8306-59d97142a76c" containerName="collect-profiles" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.376061 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d1557b7-91d6-4aac-8306-59d97142a76c" containerName="collect-profiles" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.377808 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.395612 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgg6f"] Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.560394 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-utilities\") pod \"community-operators-cgg6f\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.560778 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgbd6\" (UniqueName: \"kubernetes.io/projected/d97998a6-419f-4f4a-b313-942320f12a6b-kube-api-access-kgbd6\") pod \"community-operators-cgg6f\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.560958 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-catalog-content\") pod \"community-operators-cgg6f\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.662595 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-catalog-content\") pod \"community-operators-cgg6f\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.662809 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-utilities\") pod \"community-operators-cgg6f\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.662883 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgbd6\" (UniqueName: \"kubernetes.io/projected/d97998a6-419f-4f4a-b313-942320f12a6b-kube-api-access-kgbd6\") pod \"community-operators-cgg6f\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.664141 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-catalog-content\") pod \"community-operators-cgg6f\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.664658 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-utilities\") pod \"community-operators-cgg6f\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.699110 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgbd6\" (UniqueName: \"kubernetes.io/projected/d97998a6-419f-4f4a-b313-942320f12a6b-kube-api-access-kgbd6\") pod \"community-operators-cgg6f\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:29 crc kubenswrapper[5012]: I0219 06:17:29.722285 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:30 crc kubenswrapper[5012]: I0219 06:17:30.242695 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgg6f"] Feb 19 06:17:30 crc kubenswrapper[5012]: I0219 06:17:30.347122 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgg6f" event={"ID":"d97998a6-419f-4f4a-b313-942320f12a6b","Type":"ContainerStarted","Data":"dd7c0c797895ea4cd89489cdd87a27a03c805333e76ec09a18a354bd79977d27"} Feb 19 06:17:31 crc kubenswrapper[5012]: I0219 06:17:31.358065 5012 generic.go:334] "Generic (PLEG): container finished" podID="d97998a6-419f-4f4a-b313-942320f12a6b" containerID="cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2" exitCode=0 Feb 19 06:17:31 crc kubenswrapper[5012]: I0219 06:17:31.358114 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgg6f" event={"ID":"d97998a6-419f-4f4a-b313-942320f12a6b","Type":"ContainerDied","Data":"cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2"} Feb 19 06:17:31 crc kubenswrapper[5012]: I0219 06:17:31.361937 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 06:17:31 crc kubenswrapper[5012]: I0219 06:17:31.703384 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:17:31 crc kubenswrapper[5012]: E0219 06:17:31.703752 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 
06:17:33 crc kubenswrapper[5012]: I0219 06:17:33.387011 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgg6f" event={"ID":"d97998a6-419f-4f4a-b313-942320f12a6b","Type":"ContainerStarted","Data":"6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27"} Feb 19 06:17:34 crc kubenswrapper[5012]: I0219 06:17:34.400758 5012 generic.go:334] "Generic (PLEG): container finished" podID="d97998a6-419f-4f4a-b313-942320f12a6b" containerID="6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27" exitCode=0 Feb 19 06:17:34 crc kubenswrapper[5012]: I0219 06:17:34.400823 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgg6f" event={"ID":"d97998a6-419f-4f4a-b313-942320f12a6b","Type":"ContainerDied","Data":"6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27"} Feb 19 06:17:35 crc kubenswrapper[5012]: I0219 06:17:35.412664 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgg6f" event={"ID":"d97998a6-419f-4f4a-b313-942320f12a6b","Type":"ContainerStarted","Data":"c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef"} Feb 19 06:17:39 crc kubenswrapper[5012]: I0219 06:17:39.723462 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:39 crc kubenswrapper[5012]: I0219 06:17:39.724724 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:39 crc kubenswrapper[5012]: I0219 06:17:39.805065 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:39 crc kubenswrapper[5012]: I0219 06:17:39.844701 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cgg6f" 
podStartSLOduration=7.415747155 podStartE2EDuration="10.84467882s" podCreationTimestamp="2026-02-19 06:17:29 +0000 UTC" firstStartedPulling="2026-02-19 06:17:31.361326576 +0000 UTC m=+3147.394649175" lastFinishedPulling="2026-02-19 06:17:34.790258231 +0000 UTC m=+3150.823580840" observedRunningTime="2026-02-19 06:17:35.446427219 +0000 UTC m=+3151.479749818" watchObservedRunningTime="2026-02-19 06:17:39.84467882 +0000 UTC m=+3155.878001399" Feb 19 06:17:40 crc kubenswrapper[5012]: I0219 06:17:40.521220 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:40 crc kubenswrapper[5012]: I0219 06:17:40.585490 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cgg6f"] Feb 19 06:17:42 crc kubenswrapper[5012]: I0219 06:17:42.486566 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cgg6f" podUID="d97998a6-419f-4f4a-b313-942320f12a6b" containerName="registry-server" containerID="cri-o://c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef" gracePeriod=2 Feb 19 06:17:42 crc kubenswrapper[5012]: I0219 06:17:42.703134 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:17:42 crc kubenswrapper[5012]: E0219 06:17:42.703939 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.050709 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.165460 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-utilities\") pod \"d97998a6-419f-4f4a-b313-942320f12a6b\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.165746 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-catalog-content\") pod \"d97998a6-419f-4f4a-b313-942320f12a6b\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.165783 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgbd6\" (UniqueName: \"kubernetes.io/projected/d97998a6-419f-4f4a-b313-942320f12a6b-kube-api-access-kgbd6\") pod \"d97998a6-419f-4f4a-b313-942320f12a6b\" (UID: \"d97998a6-419f-4f4a-b313-942320f12a6b\") " Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.166530 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-utilities" (OuterVolumeSpecName: "utilities") pod "d97998a6-419f-4f4a-b313-942320f12a6b" (UID: "d97998a6-419f-4f4a-b313-942320f12a6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.171243 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d97998a6-419f-4f4a-b313-942320f12a6b-kube-api-access-kgbd6" (OuterVolumeSpecName: "kube-api-access-kgbd6") pod "d97998a6-419f-4f4a-b313-942320f12a6b" (UID: "d97998a6-419f-4f4a-b313-942320f12a6b"). InnerVolumeSpecName "kube-api-access-kgbd6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.221726 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d97998a6-419f-4f4a-b313-942320f12a6b" (UID: "d97998a6-419f-4f4a-b313-942320f12a6b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.267955 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.267994 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d97998a6-419f-4f4a-b313-942320f12a6b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.268008 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgbd6\" (UniqueName: \"kubernetes.io/projected/d97998a6-419f-4f4a-b313-942320f12a6b-kube-api-access-kgbd6\") on node \"crc\" DevicePath \"\"" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.501318 5012 generic.go:334] "Generic (PLEG): container finished" podID="d97998a6-419f-4f4a-b313-942320f12a6b" containerID="c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef" exitCode=0 Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.501355 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgg6f" event={"ID":"d97998a6-419f-4f4a-b313-942320f12a6b","Type":"ContainerDied","Data":"c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef"} Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.501381 5012 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-cgg6f" event={"ID":"d97998a6-419f-4f4a-b313-942320f12a6b","Type":"ContainerDied","Data":"dd7c0c797895ea4cd89489cdd87a27a03c805333e76ec09a18a354bd79977d27"} Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.501399 5012 scope.go:117] "RemoveContainer" containerID="c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.501429 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cgg6f" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.540971 5012 scope.go:117] "RemoveContainer" containerID="6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.562488 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cgg6f"] Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.575058 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cgg6f"] Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.594810 5012 scope.go:117] "RemoveContainer" containerID="cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.631965 5012 scope.go:117] "RemoveContainer" containerID="c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef" Feb 19 06:17:43 crc kubenswrapper[5012]: E0219 06:17:43.632378 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef\": container with ID starting with c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef not found: ID does not exist" containerID="c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 
06:17:43.632425 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef"} err="failed to get container status \"c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef\": rpc error: code = NotFound desc = could not find container \"c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef\": container with ID starting with c829367809d530efb0acb56019f73b07b791a9f9b8d9dc604ce5310a079357ef not found: ID does not exist" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.632459 5012 scope.go:117] "RemoveContainer" containerID="6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27" Feb 19 06:17:43 crc kubenswrapper[5012]: E0219 06:17:43.632719 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27\": container with ID starting with 6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27 not found: ID does not exist" containerID="6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.632761 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27"} err="failed to get container status \"6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27\": rpc error: code = NotFound desc = could not find container \"6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27\": container with ID starting with 6b56e07978397d45634dc3dece831e85b2eb9fd60d788a69ee4fa114020f7c27 not found: ID does not exist" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.632790 5012 scope.go:117] "RemoveContainer" containerID="cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2" Feb 19 06:17:43 crc 
kubenswrapper[5012]: E0219 06:17:43.632999 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2\": container with ID starting with cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2 not found: ID does not exist" containerID="cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2" Feb 19 06:17:43 crc kubenswrapper[5012]: I0219 06:17:43.633029 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2"} err="failed to get container status \"cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2\": rpc error: code = NotFound desc = could not find container \"cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2\": container with ID starting with cb280da6cc2794baf2f8068f7b05a767256b34fe16b881ad81e5814ea69d84c2 not found: ID does not exist" Feb 19 06:17:44 crc kubenswrapper[5012]: I0219 06:17:44.722034 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d97998a6-419f-4f4a-b313-942320f12a6b" path="/var/lib/kubelet/pods/d97998a6-419f-4f4a-b313-942320f12a6b/volumes" Feb 19 06:17:55 crc kubenswrapper[5012]: I0219 06:17:55.704006 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:17:55 crc kubenswrapper[5012]: E0219 06:17:55.705512 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:18:10 crc 
kubenswrapper[5012]: I0219 06:18:10.703377 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:18:10 crc kubenswrapper[5012]: E0219 06:18:10.704463 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:18:23 crc kubenswrapper[5012]: I0219 06:18:23.711629 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:18:23 crc kubenswrapper[5012]: I0219 06:18:23.975879 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"259a14333e76f5ec2c151bbd818fe48cadcca6e9989e78b8167dd4e34241e536"} Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.701333 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4fxxm"] Feb 19 06:19:41 crc kubenswrapper[5012]: E0219 06:19:41.702293 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97998a6-419f-4f4a-b313-942320f12a6b" containerName="registry-server" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.702323 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97998a6-419f-4f4a-b313-942320f12a6b" containerName="registry-server" Feb 19 06:19:41 crc kubenswrapper[5012]: E0219 06:19:41.702357 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97998a6-419f-4f4a-b313-942320f12a6b" containerName="extract-content" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 
06:19:41.702367 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97998a6-419f-4f4a-b313-942320f12a6b" containerName="extract-content" Feb 19 06:19:41 crc kubenswrapper[5012]: E0219 06:19:41.702398 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97998a6-419f-4f4a-b313-942320f12a6b" containerName="extract-utilities" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.702410 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97998a6-419f-4f4a-b313-942320f12a6b" containerName="extract-utilities" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.702669 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d97998a6-419f-4f4a-b313-942320f12a6b" containerName="registry-server" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.704689 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.715009 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4fxxm"] Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.777699 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-utilities\") pod \"redhat-operators-4fxxm\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.777818 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-catalog-content\") pod \"redhat-operators-4fxxm\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.777893 5012 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkmw8\" (UniqueName: \"kubernetes.io/projected/55648b88-ee33-485a-9b58-46b433d1397d-kube-api-access-gkmw8\") pod \"redhat-operators-4fxxm\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.879612 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-utilities\") pod \"redhat-operators-4fxxm\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.879898 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-catalog-content\") pod \"redhat-operators-4fxxm\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.879956 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkmw8\" (UniqueName: \"kubernetes.io/projected/55648b88-ee33-485a-9b58-46b433d1397d-kube-api-access-gkmw8\") pod \"redhat-operators-4fxxm\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.880801 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-utilities\") pod \"redhat-operators-4fxxm\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.881014 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-catalog-content\") pod \"redhat-operators-4fxxm\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:41 crc kubenswrapper[5012]: I0219 06:19:41.904478 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkmw8\" (UniqueName: \"kubernetes.io/projected/55648b88-ee33-485a-9b58-46b433d1397d-kube-api-access-gkmw8\") pod \"redhat-operators-4fxxm\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:42 crc kubenswrapper[5012]: I0219 06:19:42.084805 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:19:42 crc kubenswrapper[5012]: I0219 06:19:42.601758 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4fxxm"] Feb 19 06:19:42 crc kubenswrapper[5012]: I0219 06:19:42.925142 5012 generic.go:334] "Generic (PLEG): container finished" podID="55648b88-ee33-485a-9b58-46b433d1397d" containerID="b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b" exitCode=0 Feb 19 06:19:42 crc kubenswrapper[5012]: I0219 06:19:42.925194 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fxxm" event={"ID":"55648b88-ee33-485a-9b58-46b433d1397d","Type":"ContainerDied","Data":"b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b"} Feb 19 06:19:42 crc kubenswrapper[5012]: I0219 06:19:42.926346 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fxxm" event={"ID":"55648b88-ee33-485a-9b58-46b433d1397d","Type":"ContainerStarted","Data":"5255dfd3a95ea83eee361c3aec9395da3fabe1c150f3519c2b4f0f0a37e4b8e0"} Feb 19 06:19:44 crc kubenswrapper[5012]: I0219 06:19:44.950935 
5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fxxm" event={"ID":"55648b88-ee33-485a-9b58-46b433d1397d","Type":"ContainerStarted","Data":"8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18"} Feb 19 06:19:52 crc kubenswrapper[5012]: I0219 06:19:52.050069 5012 generic.go:334] "Generic (PLEG): container finished" podID="55648b88-ee33-485a-9b58-46b433d1397d" containerID="8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18" exitCode=0 Feb 19 06:19:52 crc kubenswrapper[5012]: I0219 06:19:52.050202 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fxxm" event={"ID":"55648b88-ee33-485a-9b58-46b433d1397d","Type":"ContainerDied","Data":"8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18"} Feb 19 06:19:53 crc kubenswrapper[5012]: I0219 06:19:53.067939 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fxxm" event={"ID":"55648b88-ee33-485a-9b58-46b433d1397d","Type":"ContainerStarted","Data":"852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94"} Feb 19 06:20:02 crc kubenswrapper[5012]: I0219 06:20:02.085959 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:20:02 crc kubenswrapper[5012]: I0219 06:20:02.086821 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:20:03 crc kubenswrapper[5012]: I0219 06:20:03.141440 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4fxxm" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="registry-server" probeResult="failure" output=< Feb 19 06:20:03 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 06:20:03 crc kubenswrapper[5012]: > Feb 19 06:20:13 crc kubenswrapper[5012]: I0219 
06:20:13.139573 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4fxxm" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="registry-server" probeResult="failure" output=< Feb 19 06:20:13 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 06:20:13 crc kubenswrapper[5012]: > Feb 19 06:20:22 crc kubenswrapper[5012]: I0219 06:20:22.164229 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:20:22 crc kubenswrapper[5012]: I0219 06:20:22.193586 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4fxxm" podStartSLOduration=31.527685488 podStartE2EDuration="41.193564978s" podCreationTimestamp="2026-02-19 06:19:41 +0000 UTC" firstStartedPulling="2026-02-19 06:19:42.926755179 +0000 UTC m=+3278.960077748" lastFinishedPulling="2026-02-19 06:19:52.592634659 +0000 UTC m=+3288.625957238" observedRunningTime="2026-02-19 06:19:53.099394838 +0000 UTC m=+3289.132717447" watchObservedRunningTime="2026-02-19 06:20:22.193564978 +0000 UTC m=+3318.226887547" Feb 19 06:20:22 crc kubenswrapper[5012]: I0219 06:20:22.233648 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:20:22 crc kubenswrapper[5012]: I0219 06:20:22.414408 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4fxxm"] Feb 19 06:20:23 crc kubenswrapper[5012]: I0219 06:20:23.371219 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4fxxm" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="registry-server" containerID="cri-o://852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94" gracePeriod=2 Feb 19 06:20:23 crc kubenswrapper[5012]: I0219 06:20:23.910202 5012 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.047130 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-catalog-content\") pod \"55648b88-ee33-485a-9b58-46b433d1397d\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.047251 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkmw8\" (UniqueName: \"kubernetes.io/projected/55648b88-ee33-485a-9b58-46b433d1397d-kube-api-access-gkmw8\") pod \"55648b88-ee33-485a-9b58-46b433d1397d\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.047567 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-utilities\") pod \"55648b88-ee33-485a-9b58-46b433d1397d\" (UID: \"55648b88-ee33-485a-9b58-46b433d1397d\") " Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.048404 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-utilities" (OuterVolumeSpecName: "utilities") pod "55648b88-ee33-485a-9b58-46b433d1397d" (UID: "55648b88-ee33-485a-9b58-46b433d1397d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.048678 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.059481 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55648b88-ee33-485a-9b58-46b433d1397d-kube-api-access-gkmw8" (OuterVolumeSpecName: "kube-api-access-gkmw8") pod "55648b88-ee33-485a-9b58-46b433d1397d" (UID: "55648b88-ee33-485a-9b58-46b433d1397d"). InnerVolumeSpecName "kube-api-access-gkmw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.157476 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkmw8\" (UniqueName: \"kubernetes.io/projected/55648b88-ee33-485a-9b58-46b433d1397d-kube-api-access-gkmw8\") on node \"crc\" DevicePath \"\"" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.225135 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55648b88-ee33-485a-9b58-46b433d1397d" (UID: "55648b88-ee33-485a-9b58-46b433d1397d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.261618 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55648b88-ee33-485a-9b58-46b433d1397d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.383872 5012 generic.go:334] "Generic (PLEG): container finished" podID="55648b88-ee33-485a-9b58-46b433d1397d" containerID="852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94" exitCode=0 Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.383917 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fxxm" event={"ID":"55648b88-ee33-485a-9b58-46b433d1397d","Type":"ContainerDied","Data":"852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94"} Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.383946 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fxxm" event={"ID":"55648b88-ee33-485a-9b58-46b433d1397d","Type":"ContainerDied","Data":"5255dfd3a95ea83eee361c3aec9395da3fabe1c150f3519c2b4f0f0a37e4b8e0"} Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.383961 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4fxxm" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.383970 5012 scope.go:117] "RemoveContainer" containerID="852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.429058 5012 scope.go:117] "RemoveContainer" containerID="8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.448018 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4fxxm"] Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.455822 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4fxxm"] Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.473791 5012 scope.go:117] "RemoveContainer" containerID="b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.547031 5012 scope.go:117] "RemoveContainer" containerID="852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94" Feb 19 06:20:24 crc kubenswrapper[5012]: E0219 06:20:24.547537 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94\": container with ID starting with 852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94 not found: ID does not exist" containerID="852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.547598 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94"} err="failed to get container status \"852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94\": rpc error: code = NotFound desc = could not find container 
\"852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94\": container with ID starting with 852b81aad8b09d060ac787c269c2a29e49e2a071f50b3d5a926d784ba0885c94 not found: ID does not exist" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.547629 5012 scope.go:117] "RemoveContainer" containerID="8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18" Feb 19 06:20:24 crc kubenswrapper[5012]: E0219 06:20:24.547922 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18\": container with ID starting with 8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18 not found: ID does not exist" containerID="8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.547962 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18"} err="failed to get container status \"8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18\": rpc error: code = NotFound desc = could not find container \"8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18\": container with ID starting with 8c324587f6d290ff5aeb40679eda1eb3be7c86bc1323576834b953cf96bceb18 not found: ID does not exist" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.547989 5012 scope.go:117] "RemoveContainer" containerID="b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b" Feb 19 06:20:24 crc kubenswrapper[5012]: E0219 06:20:24.548324 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b\": container with ID starting with b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b not found: ID does not exist" 
containerID="b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.548359 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b"} err="failed to get container status \"b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b\": rpc error: code = NotFound desc = could not find container \"b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b\": container with ID starting with b13992cd382003d68ea693f47a209304b328ec12d1aa1b74ce417a840193e36b not found: ID does not exist" Feb 19 06:20:24 crc kubenswrapper[5012]: I0219 06:20:24.712174 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55648b88-ee33-485a-9b58-46b433d1397d" path="/var/lib/kubelet/pods/55648b88-ee33-485a-9b58-46b433d1397d/volumes" Feb 19 06:20:44 crc kubenswrapper[5012]: I0219 06:20:44.430505 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:20:44 crc kubenswrapper[5012]: I0219 06:20:44.431423 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:21:14 crc kubenswrapper[5012]: I0219 06:21:14.430561 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 19 06:21:14 crc kubenswrapper[5012]: I0219 06:21:14.431202 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:21:44 crc kubenswrapper[5012]: I0219 06:21:44.431144 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:21:44 crc kubenswrapper[5012]: I0219 06:21:44.432068 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:21:44 crc kubenswrapper[5012]: I0219 06:21:44.432164 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:21:44 crc kubenswrapper[5012]: I0219 06:21:44.433866 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"259a14333e76f5ec2c151bbd818fe48cadcca6e9989e78b8167dd4e34241e536"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:21:44 crc kubenswrapper[5012]: I0219 06:21:44.433976 5012 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://259a14333e76f5ec2c151bbd818fe48cadcca6e9989e78b8167dd4e34241e536" gracePeriod=600 Feb 19 06:21:45 crc kubenswrapper[5012]: I0219 06:21:45.349474 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="259a14333e76f5ec2c151bbd818fe48cadcca6e9989e78b8167dd4e34241e536" exitCode=0 Feb 19 06:21:45 crc kubenswrapper[5012]: I0219 06:21:45.349528 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"259a14333e76f5ec2c151bbd818fe48cadcca6e9989e78b8167dd4e34241e536"} Feb 19 06:21:45 crc kubenswrapper[5012]: I0219 06:21:45.349908 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"} Feb 19 06:21:45 crc kubenswrapper[5012]: I0219 06:21:45.349937 5012 scope.go:117] "RemoveContainer" containerID="ebb36af93b2ae9533e381c8102c7bee38ee141afa1825433564bb7fb9c8b87fc" Feb 19 06:23:44 crc kubenswrapper[5012]: I0219 06:23:44.430703 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:23:44 crc kubenswrapper[5012]: I0219 06:23:44.431400 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.451743 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v5z2x"] Feb 19 06:23:46 crc kubenswrapper[5012]: E0219 06:23:46.452796 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="extract-content" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.452818 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="extract-content" Feb 19 06:23:46 crc kubenswrapper[5012]: E0219 06:23:46.452869 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="extract-utilities" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.452884 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="extract-utilities" Feb 19 06:23:46 crc kubenswrapper[5012]: E0219 06:23:46.452914 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="registry-server" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.452932 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="registry-server" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.453344 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="55648b88-ee33-485a-9b58-46b433d1397d" containerName="registry-server" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.456100 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.488600 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v5z2x"] Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.638258 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j82b\" (UniqueName: \"kubernetes.io/projected/c47a2602-b592-46cf-8452-4c06ba540a3f-kube-api-access-9j82b\") pod \"certified-operators-v5z2x\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.638673 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-utilities\") pod \"certified-operators-v5z2x\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.638722 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-catalog-content\") pod \"certified-operators-v5z2x\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.741559 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-catalog-content\") pod \"certified-operators-v5z2x\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.742122 5012 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-catalog-content\") pod \"certified-operators-v5z2x\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.742387 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j82b\" (UniqueName: \"kubernetes.io/projected/c47a2602-b592-46cf-8452-4c06ba540a3f-kube-api-access-9j82b\") pod \"certified-operators-v5z2x\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.742639 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-utilities\") pod \"certified-operators-v5z2x\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.743000 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-utilities\") pod \"certified-operators-v5z2x\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.766193 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j82b\" (UniqueName: \"kubernetes.io/projected/c47a2602-b592-46cf-8452-4c06ba540a3f-kube-api-access-9j82b\") pod \"certified-operators-v5z2x\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:46 crc kubenswrapper[5012]: I0219 06:23:46.791440 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:47 crc kubenswrapper[5012]: I0219 06:23:47.321705 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v5z2x"] Feb 19 06:23:47 crc kubenswrapper[5012]: I0219 06:23:47.779633 5012 generic.go:334] "Generic (PLEG): container finished" podID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerID="d7e3c8bf7e3b50487cf7e0e61bc3377a127d4db21f72dced292a5d54263fa4ee" exitCode=0 Feb 19 06:23:47 crc kubenswrapper[5012]: I0219 06:23:47.779722 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5z2x" event={"ID":"c47a2602-b592-46cf-8452-4c06ba540a3f","Type":"ContainerDied","Data":"d7e3c8bf7e3b50487cf7e0e61bc3377a127d4db21f72dced292a5d54263fa4ee"} Feb 19 06:23:47 crc kubenswrapper[5012]: I0219 06:23:47.780019 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5z2x" event={"ID":"c47a2602-b592-46cf-8452-4c06ba540a3f","Type":"ContainerStarted","Data":"fdd34cfd7a4af7091fd720c395760be22a35a636a0da0a4e96e39f9a61ed5980"} Feb 19 06:23:47 crc kubenswrapper[5012]: I0219 06:23:47.781624 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 06:23:49 crc kubenswrapper[5012]: I0219 06:23:49.802254 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5z2x" event={"ID":"c47a2602-b592-46cf-8452-4c06ba540a3f","Type":"ContainerStarted","Data":"6b96ddbcd9517469ebb56cd58f965d25623177e6b664a50e79bcb5defaa92b3f"} Feb 19 06:23:50 crc kubenswrapper[5012]: I0219 06:23:50.821840 5012 generic.go:334] "Generic (PLEG): container finished" podID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerID="6b96ddbcd9517469ebb56cd58f965d25623177e6b664a50e79bcb5defaa92b3f" exitCode=0 Feb 19 06:23:50 crc kubenswrapper[5012]: I0219 06:23:50.822207 5012 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-v5z2x" event={"ID":"c47a2602-b592-46cf-8452-4c06ba540a3f","Type":"ContainerDied","Data":"6b96ddbcd9517469ebb56cd58f965d25623177e6b664a50e79bcb5defaa92b3f"} Feb 19 06:23:51 crc kubenswrapper[5012]: I0219 06:23:51.867895 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5z2x" event={"ID":"c47a2602-b592-46cf-8452-4c06ba540a3f","Type":"ContainerStarted","Data":"8b0e34b2146720d77c42585f15b129821d21aeac992210df13e5567a64790406"} Feb 19 06:23:51 crc kubenswrapper[5012]: I0219 06:23:51.893469 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v5z2x" podStartSLOduration=2.211185039 podStartE2EDuration="5.893450927s" podCreationTimestamp="2026-02-19 06:23:46 +0000 UTC" firstStartedPulling="2026-02-19 06:23:47.781345633 +0000 UTC m=+3523.814668202" lastFinishedPulling="2026-02-19 06:23:51.463611521 +0000 UTC m=+3527.496934090" observedRunningTime="2026-02-19 06:23:51.893246782 +0000 UTC m=+3527.926569351" watchObservedRunningTime="2026-02-19 06:23:51.893450927 +0000 UTC m=+3527.926773496" Feb 19 06:23:56 crc kubenswrapper[5012]: I0219 06:23:56.791972 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:56 crc kubenswrapper[5012]: I0219 06:23:56.792500 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:56 crc kubenswrapper[5012]: I0219 06:23:56.844527 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:56 crc kubenswrapper[5012]: I0219 06:23:56.958874 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:23:59 crc kubenswrapper[5012]: I0219 
06:23:59.642641 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v5z2x"] Feb 19 06:23:59 crc kubenswrapper[5012]: I0219 06:23:59.643281 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v5z2x" podUID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerName="registry-server" containerID="cri-o://8b0e34b2146720d77c42585f15b129821d21aeac992210df13e5567a64790406" gracePeriod=2 Feb 19 06:23:59 crc kubenswrapper[5012]: I0219 06:23:59.970918 5012 generic.go:334] "Generic (PLEG): container finished" podID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerID="8b0e34b2146720d77c42585f15b129821d21aeac992210df13e5567a64790406" exitCode=0 Feb 19 06:23:59 crc kubenswrapper[5012]: I0219 06:23:59.970968 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5z2x" event={"ID":"c47a2602-b592-46cf-8452-4c06ba540a3f","Type":"ContainerDied","Data":"8b0e34b2146720d77c42585f15b129821d21aeac992210df13e5567a64790406"} Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.218885 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.267358 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-catalog-content\") pod \"c47a2602-b592-46cf-8452-4c06ba540a3f\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.267647 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j82b\" (UniqueName: \"kubernetes.io/projected/c47a2602-b592-46cf-8452-4c06ba540a3f-kube-api-access-9j82b\") pod \"c47a2602-b592-46cf-8452-4c06ba540a3f\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.267717 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-utilities\") pod \"c47a2602-b592-46cf-8452-4c06ba540a3f\" (UID: \"c47a2602-b592-46cf-8452-4c06ba540a3f\") " Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.268940 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-utilities" (OuterVolumeSpecName: "utilities") pod "c47a2602-b592-46cf-8452-4c06ba540a3f" (UID: "c47a2602-b592-46cf-8452-4c06ba540a3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.274988 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c47a2602-b592-46cf-8452-4c06ba540a3f-kube-api-access-9j82b" (OuterVolumeSpecName: "kube-api-access-9j82b") pod "c47a2602-b592-46cf-8452-4c06ba540a3f" (UID: "c47a2602-b592-46cf-8452-4c06ba540a3f"). InnerVolumeSpecName "kube-api-access-9j82b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.313580 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c47a2602-b592-46cf-8452-4c06ba540a3f" (UID: "c47a2602-b592-46cf-8452-4c06ba540a3f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.370669 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j82b\" (UniqueName: \"kubernetes.io/projected/c47a2602-b592-46cf-8452-4c06ba540a3f-kube-api-access-9j82b\") on node \"crc\" DevicePath \"\"" Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.370715 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.370728 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c47a2602-b592-46cf-8452-4c06ba540a3f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.987135 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5z2x" event={"ID":"c47a2602-b592-46cf-8452-4c06ba540a3f","Type":"ContainerDied","Data":"fdd34cfd7a4af7091fd720c395760be22a35a636a0da0a4e96e39f9a61ed5980"} Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.987575 5012 scope.go:117] "RemoveContainer" containerID="8b0e34b2146720d77c42585f15b129821d21aeac992210df13e5567a64790406" Feb 19 06:24:00 crc kubenswrapper[5012]: I0219 06:24:00.987217 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v5z2x" Feb 19 06:24:01 crc kubenswrapper[5012]: I0219 06:24:01.026827 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v5z2x"] Feb 19 06:24:01 crc kubenswrapper[5012]: I0219 06:24:01.034536 5012 scope.go:117] "RemoveContainer" containerID="6b96ddbcd9517469ebb56cd58f965d25623177e6b664a50e79bcb5defaa92b3f" Feb 19 06:24:01 crc kubenswrapper[5012]: I0219 06:24:01.043290 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v5z2x"] Feb 19 06:24:01 crc kubenswrapper[5012]: I0219 06:24:01.072700 5012 scope.go:117] "RemoveContainer" containerID="d7e3c8bf7e3b50487cf7e0e61bc3377a127d4db21f72dced292a5d54263fa4ee" Feb 19 06:24:02 crc kubenswrapper[5012]: I0219 06:24:02.717786 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c47a2602-b592-46cf-8452-4c06ba540a3f" path="/var/lib/kubelet/pods/c47a2602-b592-46cf-8452-4c06ba540a3f/volumes" Feb 19 06:24:14 crc kubenswrapper[5012]: I0219 06:24:14.431941 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:24:14 crc kubenswrapper[5012]: I0219 06:24:14.432643 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:24:44 crc kubenswrapper[5012]: I0219 06:24:44.430761 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:24:44 crc kubenswrapper[5012]: I0219 06:24:44.431689 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:24:44 crc kubenswrapper[5012]: I0219 06:24:44.431770 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:24:44 crc kubenswrapper[5012]: I0219 06:24:44.433030 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:24:44 crc kubenswrapper[5012]: I0219 06:24:44.433136 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" gracePeriod=600 Feb 19 06:24:44 crc kubenswrapper[5012]: E0219 06:24:44.580044 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:24:45 crc kubenswrapper[5012]: I0219 06:24:45.487739 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" exitCode=0
Feb 19 06:24:45 crc kubenswrapper[5012]: I0219 06:24:45.487854 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"}
Feb 19 06:24:45 crc kubenswrapper[5012]: I0219 06:24:45.488211 5012 scope.go:117] "RemoveContainer" containerID="259a14333e76f5ec2c151bbd818fe48cadcca6e9989e78b8167dd4e34241e536"
Feb 19 06:24:45 crc kubenswrapper[5012]: I0219 06:24:45.489430 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:24:45 crc kubenswrapper[5012]: E0219 06:24:45.489974 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:25:00 crc kubenswrapper[5012]: I0219 06:25:00.704184 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:25:00 crc kubenswrapper[5012]: E0219 06:25:00.705423 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:25:11 crc kubenswrapper[5012]: I0219 06:25:11.703346 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:25:11 crc kubenswrapper[5012]: E0219 06:25:11.704250 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:25:25 crc kubenswrapper[5012]: I0219 06:25:25.704161 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:25:25 crc kubenswrapper[5012]: E0219 06:25:25.705207 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.899992 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vx6fn"]
Feb 19 06:25:26 crc kubenswrapper[5012]: E0219 06:25:26.900978 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerName="registry-server"
Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.901002 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerName="registry-server"
Feb 19 06:25:26 crc kubenswrapper[5012]: E0219 06:25:26.901064 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerName="extract-content"
Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.901090 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerName="extract-content"
Feb 19 06:25:26 crc kubenswrapper[5012]: E0219 06:25:26.901143 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerName="extract-utilities"
Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.901158 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerName="extract-utilities"
Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.901640 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="c47a2602-b592-46cf-8452-4c06ba540a3f" containerName="registry-server"
Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.905668 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vx6fn"
Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.913241 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vx6fn"]
Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.950709 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-catalog-content\") pod \"redhat-marketplace-vx6fn\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " pod="openshift-marketplace/redhat-marketplace-vx6fn"
Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.950759 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wljpn\" (UniqueName: \"kubernetes.io/projected/172af46a-ab5f-4245-8e79-7f204418aff2-kube-api-access-wljpn\") pod \"redhat-marketplace-vx6fn\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " pod="openshift-marketplace/redhat-marketplace-vx6fn"
Feb 19 06:25:26 crc kubenswrapper[5012]: I0219 06:25:26.950807 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-utilities\") pod \"redhat-marketplace-vx6fn\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " pod="openshift-marketplace/redhat-marketplace-vx6fn"
Feb 19 06:25:27 crc kubenswrapper[5012]: I0219 06:25:27.052266 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-catalog-content\") pod \"redhat-marketplace-vx6fn\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " pod="openshift-marketplace/redhat-marketplace-vx6fn"
Feb 19 06:25:27 crc kubenswrapper[5012]: I0219 06:25:27.052334 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wljpn\" (UniqueName: \"kubernetes.io/projected/172af46a-ab5f-4245-8e79-7f204418aff2-kube-api-access-wljpn\") pod \"redhat-marketplace-vx6fn\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " pod="openshift-marketplace/redhat-marketplace-vx6fn"
Feb 19 06:25:27 crc kubenswrapper[5012]: I0219 06:25:27.052383 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-utilities\") pod \"redhat-marketplace-vx6fn\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " pod="openshift-marketplace/redhat-marketplace-vx6fn"
Feb 19 06:25:27 crc kubenswrapper[5012]: I0219 06:25:27.052995 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-utilities\") pod \"redhat-marketplace-vx6fn\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " pod="openshift-marketplace/redhat-marketplace-vx6fn"
Feb 19 06:25:27 crc kubenswrapper[5012]: I0219 06:25:27.052988 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-catalog-content\") pod \"redhat-marketplace-vx6fn\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " pod="openshift-marketplace/redhat-marketplace-vx6fn"
Feb 19 06:25:27 crc kubenswrapper[5012]: I0219 06:25:27.076036 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wljpn\" (UniqueName: \"kubernetes.io/projected/172af46a-ab5f-4245-8e79-7f204418aff2-kube-api-access-wljpn\") pod \"redhat-marketplace-vx6fn\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") " pod="openshift-marketplace/redhat-marketplace-vx6fn"
Feb 19 06:25:27 crc kubenswrapper[5012]: I0219 06:25:27.246113 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vx6fn"
Feb 19 06:25:27 crc kubenswrapper[5012]: I0219 06:25:27.514317 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vx6fn"]
Feb 19 06:25:28 crc kubenswrapper[5012]: I0219 06:25:28.045297 5012 generic.go:334] "Generic (PLEG): container finished" podID="172af46a-ab5f-4245-8e79-7f204418aff2" containerID="fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81" exitCode=0
Feb 19 06:25:28 crc kubenswrapper[5012]: I0219 06:25:28.045419 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vx6fn" event={"ID":"172af46a-ab5f-4245-8e79-7f204418aff2","Type":"ContainerDied","Data":"fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81"}
Feb 19 06:25:28 crc kubenswrapper[5012]: I0219 06:25:28.045470 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vx6fn" event={"ID":"172af46a-ab5f-4245-8e79-7f204418aff2","Type":"ContainerStarted","Data":"e9b0a187167dabdfed5b0b44744a9761128c90057eb98772f7fc509641cbcffc"}
Feb 19 06:25:29 crc kubenswrapper[5012]: I0219 06:25:29.058728 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vx6fn" event={"ID":"172af46a-ab5f-4245-8e79-7f204418aff2","Type":"ContainerStarted","Data":"4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236"}
Feb 19 06:25:30 crc kubenswrapper[5012]: I0219 06:25:30.073673 5012 generic.go:334] "Generic (PLEG): container finished" podID="172af46a-ab5f-4245-8e79-7f204418aff2" containerID="4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236" exitCode=0
Feb 19 06:25:30 crc kubenswrapper[5012]: I0219 06:25:30.073718 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vx6fn" event={"ID":"172af46a-ab5f-4245-8e79-7f204418aff2","Type":"ContainerDied","Data":"4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236"}
Feb 19 06:25:31 crc kubenswrapper[5012]: I0219 06:25:31.087284 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vx6fn" event={"ID":"172af46a-ab5f-4245-8e79-7f204418aff2","Type":"ContainerStarted","Data":"54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0"}
Feb 19 06:25:31 crc kubenswrapper[5012]: I0219 06:25:31.111548 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vx6fn" podStartSLOduration=2.711265739 podStartE2EDuration="5.111531016s" podCreationTimestamp="2026-02-19 06:25:26 +0000 UTC" firstStartedPulling="2026-02-19 06:25:28.050644176 +0000 UTC m=+3624.083966785" lastFinishedPulling="2026-02-19 06:25:30.450909453 +0000 UTC m=+3626.484232062" observedRunningTime="2026-02-19 06:25:31.106045582 +0000 UTC m=+3627.139368191" watchObservedRunningTime="2026-02-19 06:25:31.111531016 +0000 UTC m=+3627.144853585"
Feb 19 06:25:37 crc kubenswrapper[5012]: I0219 06:25:37.246510 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vx6fn"
Feb 19 06:25:37 crc kubenswrapper[5012]: I0219 06:25:37.247399 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vx6fn"
Feb 19 06:25:37 crc kubenswrapper[5012]: I0219 06:25:37.335841 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vx6fn"
Feb 19 06:25:38 crc kubenswrapper[5012]: I0219 06:25:38.287559 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vx6fn"
Feb 19 06:25:38 crc kubenswrapper[5012]: I0219 06:25:38.350860 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vx6fn"]
Feb 19 06:25:40 crc kubenswrapper[5012]: I0219 06:25:40.228757 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vx6fn" podUID="172af46a-ab5f-4245-8e79-7f204418aff2" containerName="registry-server" containerID="cri-o://54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0" gracePeriod=2
Feb 19 06:25:40 crc kubenswrapper[5012]: I0219 06:25:40.703627 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:25:40 crc kubenswrapper[5012]: E0219 06:25:40.704499 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:25:40 crc kubenswrapper[5012]: I0219 06:25:40.762469 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vx6fn"
Feb 19 06:25:40 crc kubenswrapper[5012]: I0219 06:25:40.925960 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-utilities\") pod \"172af46a-ab5f-4245-8e79-7f204418aff2\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") "
Feb 19 06:25:40 crc kubenswrapper[5012]: I0219 06:25:40.926228 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-catalog-content\") pod \"172af46a-ab5f-4245-8e79-7f204418aff2\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") "
Feb 19 06:25:40 crc kubenswrapper[5012]: I0219 06:25:40.926319 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wljpn\" (UniqueName: \"kubernetes.io/projected/172af46a-ab5f-4245-8e79-7f204418aff2-kube-api-access-wljpn\") pod \"172af46a-ab5f-4245-8e79-7f204418aff2\" (UID: \"172af46a-ab5f-4245-8e79-7f204418aff2\") "
Feb 19 06:25:40 crc kubenswrapper[5012]: I0219 06:25:40.927491 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-utilities" (OuterVolumeSpecName: "utilities") pod "172af46a-ab5f-4245-8e79-7f204418aff2" (UID: "172af46a-ab5f-4245-8e79-7f204418aff2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 06:25:40 crc kubenswrapper[5012]: I0219 06:25:40.941949 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/172af46a-ab5f-4245-8e79-7f204418aff2-kube-api-access-wljpn" (OuterVolumeSpecName: "kube-api-access-wljpn") pod "172af46a-ab5f-4245-8e79-7f204418aff2" (UID: "172af46a-ab5f-4245-8e79-7f204418aff2"). InnerVolumeSpecName "kube-api-access-wljpn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 06:25:40 crc kubenswrapper[5012]: I0219 06:25:40.963647 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "172af46a-ab5f-4245-8e79-7f204418aff2" (UID: "172af46a-ab5f-4245-8e79-7f204418aff2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.029005 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.029046 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/172af46a-ab5f-4245-8e79-7f204418aff2-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.029064 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wljpn\" (UniqueName: \"kubernetes.io/projected/172af46a-ab5f-4245-8e79-7f204418aff2-kube-api-access-wljpn\") on node \"crc\" DevicePath \"\""
Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.240289 5012 generic.go:334] "Generic (PLEG): container finished" podID="172af46a-ab5f-4245-8e79-7f204418aff2" containerID="54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0" exitCode=0
Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.240376 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vx6fn" event={"ID":"172af46a-ab5f-4245-8e79-7f204418aff2","Type":"ContainerDied","Data":"54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0"}
Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.240416 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vx6fn" event={"ID":"172af46a-ab5f-4245-8e79-7f204418aff2","Type":"ContainerDied","Data":"e9b0a187167dabdfed5b0b44744a9761128c90057eb98772f7fc509641cbcffc"}
Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.240426 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vx6fn"
Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.240444 5012 scope.go:117] "RemoveContainer" containerID="54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0"
Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.282708 5012 scope.go:117] "RemoveContainer" containerID="4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236"
Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.305396 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vx6fn"]
Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.315858 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vx6fn"]
Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.326216 5012 scope.go:117] "RemoveContainer" containerID="fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81"
Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.369578 5012 scope.go:117] "RemoveContainer" containerID="54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0"
Feb 19 06:25:41 crc kubenswrapper[5012]: E0219 06:25:41.370668 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0\": container with ID starting with 54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0 not found: ID does not exist" containerID="54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0"
Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.370742 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0"} err="failed to get container status \"54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0\": rpc error: code = NotFound desc = could not find container \"54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0\": container with ID starting with 54821a469bf1780460597fa5e20f7cc517af4226307806e147a305e7a04eabe0 not found: ID does not exist"
Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.370783 5012 scope.go:117] "RemoveContainer" containerID="4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236"
Feb 19 06:25:41 crc kubenswrapper[5012]: E0219 06:25:41.371182 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236\": container with ID starting with 4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236 not found: ID does not exist" containerID="4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236"
Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.371214 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236"} err="failed to get container status \"4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236\": rpc error: code = NotFound desc = could not find container \"4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236\": container with ID starting with 4d0ccdbaae2512b086e70cc8552dec13ec1fd59bfa1a9550bca31fc435da0236 not found: ID does not exist"
Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.371242 5012 scope.go:117] "RemoveContainer" containerID="fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81"
Feb 19 06:25:41 crc kubenswrapper[5012]: E0219 06:25:41.371706 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81\": container with ID starting with fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81 not found: ID does not exist" containerID="fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81"
Feb 19 06:25:41 crc kubenswrapper[5012]: I0219 06:25:41.371755 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81"} err="failed to get container status \"fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81\": rpc error: code = NotFound desc = could not find container \"fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81\": container with ID starting with fff5951b7a0e806e87ca3340ae3f36f32d645263ecb2f83d20e32c20ce4e7c81 not found: ID does not exist"
Feb 19 06:25:42 crc kubenswrapper[5012]: I0219 06:25:42.760811 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="172af46a-ab5f-4245-8e79-7f204418aff2" path="/var/lib/kubelet/pods/172af46a-ab5f-4245-8e79-7f204418aff2/volumes"
Feb 19 06:25:51 crc kubenswrapper[5012]: I0219 06:25:51.703952 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:25:51 crc kubenswrapper[5012]: E0219 06:25:51.705368 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:26:04 crc kubenswrapper[5012]: I0219 06:26:04.703862 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:26:04 crc kubenswrapper[5012]: E0219 06:26:04.705899 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:26:17 crc kubenswrapper[5012]: I0219 06:26:17.704871 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:26:17 crc kubenswrapper[5012]: E0219 06:26:17.705938 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:26:32 crc kubenswrapper[5012]: I0219 06:26:32.703886 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:26:32 crc kubenswrapper[5012]: E0219 06:26:32.705646 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:26:43 crc kubenswrapper[5012]: I0219 06:26:43.703756 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:26:43 crc kubenswrapper[5012]: E0219 06:26:43.704847 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:26:54 crc kubenswrapper[5012]: I0219 06:26:54.715604 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:26:54 crc kubenswrapper[5012]: E0219 06:26:54.716281 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:27:05 crc kubenswrapper[5012]: I0219 06:27:05.703673 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:27:05 crc kubenswrapper[5012]: E0219 06:27:05.705155 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:27:18 crc kubenswrapper[5012]: I0219 06:27:18.703613 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:27:18 crc kubenswrapper[5012]: E0219 06:27:18.704269 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:27:33 crc kubenswrapper[5012]: I0219 06:27:33.703268 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:27:33 crc kubenswrapper[5012]: E0219 06:27:33.704556 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:27:45 crc kubenswrapper[5012]: I0219 06:27:45.703624 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:27:45 crc kubenswrapper[5012]: E0219 06:27:45.704717 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:28:00 crc kubenswrapper[5012]: I0219 06:28:00.709688 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:28:00 crc kubenswrapper[5012]: E0219 06:28:00.710855 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:28:03 crc kubenswrapper[5012]: I0219 06:28:03.726287 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="1fd0c672-e258-4feb-8bbd-26135f92f7fb" containerName="galera" probeResult="failure" output="command timed out"
Feb 19 06:28:03 crc kubenswrapper[5012]: I0219 06:28:03.726329 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="1fd0c672-e258-4feb-8bbd-26135f92f7fb" containerName="galera" probeResult="failure" output="command timed out"
Feb 19 06:28:13 crc kubenswrapper[5012]: I0219 06:28:13.703796 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:28:13 crc kubenswrapper[5012]: E0219 06:28:13.708371 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:28:25 crc kubenswrapper[5012]: I0219 06:28:25.703212 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:28:25 crc kubenswrapper[5012]: E0219 06:28:25.704376 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:28:36 crc kubenswrapper[5012]: I0219 06:28:36.703404 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa"
Feb 19 06:28:36 crc kubenswrapper[5012]: E0219 06:28:36.704597 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.488131 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pxf2x"]
Feb 19 06:28:47 crc kubenswrapper[5012]: E0219 06:28:47.489239 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172af46a-ab5f-4245-8e79-7f204418aff2" containerName="extract-utilities"
Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.489255 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="172af46a-ab5f-4245-8e79-7f204418aff2" containerName="extract-utilities"
Feb 19 06:28:47 crc kubenswrapper[5012]: E0219 06:28:47.489286 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172af46a-ab5f-4245-8e79-7f204418aff2" containerName="registry-server"
Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.489294 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="172af46a-ab5f-4245-8e79-7f204418aff2" containerName="registry-server"
Feb 19 06:28:47 crc kubenswrapper[5012]: E0219 06:28:47.489579 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="172af46a-ab5f-4245-8e79-7f204418aff2" containerName="extract-content"
Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.489590 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="172af46a-ab5f-4245-8e79-7f204418aff2" containerName="extract-content"
Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.489859 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="172af46a-ab5f-4245-8e79-7f204418aff2" containerName="registry-server"
Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.491646 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pxf2x"
Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.508567 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxf2x"]
Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.584705 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg6d6\" (UniqueName: \"kubernetes.io/projected/4d86775d-0772-4adf-9ed9-c7b3016d97e7-kube-api-access-xg6d6\") pod \"community-operators-pxf2x\" (UID: \"4d86775d-0772-4adf-9ed9-c7b3016d97e7\") " pod="openshift-marketplace/community-operators-pxf2x"
Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.584911 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d86775d-0772-4adf-9ed9-c7b3016d97e7-utilities\") pod \"community-operators-pxf2x\" (UID: \"4d86775d-0772-4adf-9ed9-c7b3016d97e7\") " pod="openshift-marketplace/community-operators-pxf2x"
Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.584981 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d86775d-0772-4adf-9ed9-c7b3016d97e7-catalog-content\") pod \"community-operators-pxf2x\" (UID: \"4d86775d-0772-4adf-9ed9-c7b3016d97e7\") " pod="openshift-marketplace/community-operators-pxf2x"
Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.687552 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d86775d-0772-4adf-9ed9-c7b3016d97e7-utilities\") pod \"community-operators-pxf2x\" (UID: \"4d86775d-0772-4adf-9ed9-c7b3016d97e7\") " pod="openshift-marketplace/community-operators-pxf2x"
Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.687659 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d86775d-0772-4adf-9ed9-c7b3016d97e7-catalog-content\") pod \"community-operators-pxf2x\" (UID: \"4d86775d-0772-4adf-9ed9-c7b3016d97e7\") " pod="openshift-marketplace/community-operators-pxf2x"
Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.687715 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg6d6\" (UniqueName: \"kubernetes.io/projected/4d86775d-0772-4adf-9ed9-c7b3016d97e7-kube-api-access-xg6d6\") pod \"community-operators-pxf2x\" (UID: \"4d86775d-0772-4adf-9ed9-c7b3016d97e7\") " pod="openshift-marketplace/community-operators-pxf2x"
Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.688158 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d86775d-0772-4adf-9ed9-c7b3016d97e7-utilities\") pod \"community-operators-pxf2x\" (UID: \"4d86775d-0772-4adf-9ed9-c7b3016d97e7\") " pod="openshift-marketplace/community-operators-pxf2x"
Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.688223 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d86775d-0772-4adf-9ed9-c7b3016d97e7-catalog-content\") pod \"community-operators-pxf2x\" (UID: \"4d86775d-0772-4adf-9ed9-c7b3016d97e7\") " pod="openshift-marketplace/community-operators-pxf2x"
Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.715980 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg6d6\" (UniqueName: \"kubernetes.io/projected/4d86775d-0772-4adf-9ed9-c7b3016d97e7-kube-api-access-xg6d6\") pod \"community-operators-pxf2x\" (UID: \"4d86775d-0772-4adf-9ed9-c7b3016d97e7\") " pod="openshift-marketplace/community-operators-pxf2x"
Feb 19 06:28:47 crc kubenswrapper[5012]: I0219 06:28:47.819232 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pxf2x"
Feb 19 06:28:48 crc kubenswrapper[5012]: I0219 06:28:48.358764 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxf2x"]
Feb 19 06:28:49 crc kubenswrapper[5012]: I0219 06:28:49.600889 5012 generic.go:334] "Generic (PLEG): container finished" podID="4d86775d-0772-4adf-9ed9-c7b3016d97e7" containerID="2e49cc5d1f19e547f3c40e84406ad299664d5c74bcbb965ace63f18eab5b6c2e" exitCode=0
Feb 19 06:28:49 crc kubenswrapper[5012]: I0219 06:28:49.601685 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxf2x" event={"ID":"4d86775d-0772-4adf-9ed9-c7b3016d97e7","Type":"ContainerDied","Data":"2e49cc5d1f19e547f3c40e84406ad299664d5c74bcbb965ace63f18eab5b6c2e"}
Feb 19 06:28:49 crc kubenswrapper[5012]: I0219 06:28:49.601732 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxf2x"
event={"ID":"4d86775d-0772-4adf-9ed9-c7b3016d97e7","Type":"ContainerStarted","Data":"7ac581084802c7d8322972f7f930fc88583a08840a6726fe7aee7a04889bb890"} Feb 19 06:28:49 crc kubenswrapper[5012]: I0219 06:28:49.605014 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 06:28:50 crc kubenswrapper[5012]: I0219 06:28:50.703556 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:28:50 crc kubenswrapper[5012]: E0219 06:28:50.705101 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:28:54 crc kubenswrapper[5012]: I0219 06:28:54.658083 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxf2x" event={"ID":"4d86775d-0772-4adf-9ed9-c7b3016d97e7","Type":"ContainerStarted","Data":"26f630cfb6ba7ceb8e1ec51f3107849c4ee537f733203b7bcb98c45bf30728fa"} Feb 19 06:28:55 crc kubenswrapper[5012]: I0219 06:28:55.672895 5012 generic.go:334] "Generic (PLEG): container finished" podID="4d86775d-0772-4adf-9ed9-c7b3016d97e7" containerID="26f630cfb6ba7ceb8e1ec51f3107849c4ee537f733203b7bcb98c45bf30728fa" exitCode=0 Feb 19 06:28:55 crc kubenswrapper[5012]: I0219 06:28:55.673159 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxf2x" event={"ID":"4d86775d-0772-4adf-9ed9-c7b3016d97e7","Type":"ContainerDied","Data":"26f630cfb6ba7ceb8e1ec51f3107849c4ee537f733203b7bcb98c45bf30728fa"} Feb 19 06:28:56 crc kubenswrapper[5012]: I0219 06:28:56.690085 5012 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-pxf2x" event={"ID":"4d86775d-0772-4adf-9ed9-c7b3016d97e7","Type":"ContainerStarted","Data":"83897394254a09c8d3f5ccb409151529e06a1388aec8850dc52a000891f989d3"} Feb 19 06:28:56 crc kubenswrapper[5012]: I0219 06:28:56.718765 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pxf2x" podStartSLOduration=3.238273465 podStartE2EDuration="9.718748689s" podCreationTimestamp="2026-02-19 06:28:47 +0000 UTC" firstStartedPulling="2026-02-19 06:28:49.604630471 +0000 UTC m=+3825.637953070" lastFinishedPulling="2026-02-19 06:28:56.085105715 +0000 UTC m=+3832.118428294" observedRunningTime="2026-02-19 06:28:56.718441441 +0000 UTC m=+3832.751764050" watchObservedRunningTime="2026-02-19 06:28:56.718748689 +0000 UTC m=+3832.752071268" Feb 19 06:28:57 crc kubenswrapper[5012]: I0219 06:28:57.820126 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:28:57 crc kubenswrapper[5012]: I0219 06:28:57.820469 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:28:58 crc kubenswrapper[5012]: I0219 06:28:58.909537 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pxf2x" podUID="4d86775d-0772-4adf-9ed9-c7b3016d97e7" containerName="registry-server" probeResult="failure" output=< Feb 19 06:28:58 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 06:28:58 crc kubenswrapper[5012]: > Feb 19 06:29:05 crc kubenswrapper[5012]: I0219 06:29:05.703384 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:29:05 crc kubenswrapper[5012]: E0219 06:29:05.704228 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:29:07 crc kubenswrapper[5012]: I0219 06:29:07.889888 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:29:07 crc kubenswrapper[5012]: I0219 06:29:07.951331 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pxf2x" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.062107 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxf2x"] Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.154955 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bj5sc"] Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.155327 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bj5sc" podUID="b03ab861-19bb-4215-9b19-990a14b35367" containerName="registry-server" containerID="cri-o://ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80" gracePeriod=2 Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.665354 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bj5sc" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.794356 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-utilities\") pod \"b03ab861-19bb-4215-9b19-990a14b35367\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.794492 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqbpk\" (UniqueName: \"kubernetes.io/projected/b03ab861-19bb-4215-9b19-990a14b35367-kube-api-access-lqbpk\") pod \"b03ab861-19bb-4215-9b19-990a14b35367\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.794564 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-catalog-content\") pod \"b03ab861-19bb-4215-9b19-990a14b35367\" (UID: \"b03ab861-19bb-4215-9b19-990a14b35367\") " Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.795163 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-utilities" (OuterVolumeSpecName: "utilities") pod "b03ab861-19bb-4215-9b19-990a14b35367" (UID: "b03ab861-19bb-4215-9b19-990a14b35367"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.808587 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b03ab861-19bb-4215-9b19-990a14b35367-kube-api-access-lqbpk" (OuterVolumeSpecName: "kube-api-access-lqbpk") pod "b03ab861-19bb-4215-9b19-990a14b35367" (UID: "b03ab861-19bb-4215-9b19-990a14b35367"). InnerVolumeSpecName "kube-api-access-lqbpk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.829081 5012 generic.go:334] "Generic (PLEG): container finished" podID="b03ab861-19bb-4215-9b19-990a14b35367" containerID="ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80" exitCode=0 Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.829149 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bj5sc" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.829148 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj5sc" event={"ID":"b03ab861-19bb-4215-9b19-990a14b35367","Type":"ContainerDied","Data":"ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80"} Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.829450 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bj5sc" event={"ID":"b03ab861-19bb-4215-9b19-990a14b35367","Type":"ContainerDied","Data":"b3f8ca73c66c4fd97d0f19be0a24c8b8a95a41c1d3401dfb594c1ddc1a916e29"} Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.829472 5012 scope.go:117] "RemoveContainer" containerID="ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.868248 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b03ab861-19bb-4215-9b19-990a14b35367" (UID: "b03ab861-19bb-4215-9b19-990a14b35367"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.871651 5012 scope.go:117] "RemoveContainer" containerID="8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.888949 5012 scope.go:117] "RemoveContainer" containerID="8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.896650 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.896669 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b03ab861-19bb-4215-9b19-990a14b35367-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.896695 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqbpk\" (UniqueName: \"kubernetes.io/projected/b03ab861-19bb-4215-9b19-990a14b35367-kube-api-access-lqbpk\") on node \"crc\" DevicePath \"\"" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.934825 5012 scope.go:117] "RemoveContainer" containerID="ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80" Feb 19 06:29:08 crc kubenswrapper[5012]: E0219 06:29:08.935215 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80\": container with ID starting with ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80 not found: ID does not exist" containerID="ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.935247 5012 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80"} err="failed to get container status \"ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80\": rpc error: code = NotFound desc = could not find container \"ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80\": container with ID starting with ebe38592b8ae4079f585b3519102fa9309861e9c37959946a63dc90461b3cc80 not found: ID does not exist" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.935268 5012 scope.go:117] "RemoveContainer" containerID="8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f" Feb 19 06:29:08 crc kubenswrapper[5012]: E0219 06:29:08.935472 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f\": container with ID starting with 8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f not found: ID does not exist" containerID="8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.935496 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f"} err="failed to get container status \"8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f\": rpc error: code = NotFound desc = could not find container \"8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f\": container with ID starting with 8f9b569c3c2e67e5f3c558830947499f434c017a9e36750ee0f9d2e66c5dbc3f not found: ID does not exist" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.935508 5012 scope.go:117] "RemoveContainer" containerID="8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b" Feb 19 06:29:08 crc kubenswrapper[5012]: E0219 06:29:08.935688 5012 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b\": container with ID starting with 8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b not found: ID does not exist" containerID="8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b" Feb 19 06:29:08 crc kubenswrapper[5012]: I0219 06:29:08.935708 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b"} err="failed to get container status \"8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b\": rpc error: code = NotFound desc = could not find container \"8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b\": container with ID starting with 8f9716ee78fdc06734bcad1916e97cfabddcc3b7600529571489f7c1e96e8d9b not found: ID does not exist" Feb 19 06:29:09 crc kubenswrapper[5012]: I0219 06:29:09.185259 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bj5sc"] Feb 19 06:29:09 crc kubenswrapper[5012]: I0219 06:29:09.196036 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bj5sc"] Feb 19 06:29:10 crc kubenswrapper[5012]: I0219 06:29:10.723996 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b03ab861-19bb-4215-9b19-990a14b35367" path="/var/lib/kubelet/pods/b03ab861-19bb-4215-9b19-990a14b35367/volumes" Feb 19 06:29:19 crc kubenswrapper[5012]: I0219 06:29:19.703974 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:29:19 crc kubenswrapper[5012]: E0219 06:29:19.705146 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:29:32 crc kubenswrapper[5012]: I0219 06:29:32.703183 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:29:32 crc kubenswrapper[5012]: E0219 06:29:32.704066 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:29:44 crc kubenswrapper[5012]: I0219 06:29:44.716371 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:29:45 crc kubenswrapper[5012]: I0219 06:29:45.262095 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"a8624682a1b0fe5d91d96534d39753294d14b7d998fb6da563d9b8a2dee2a6b7"} Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.206388 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674"] Feb 19 06:30:00 crc kubenswrapper[5012]: E0219 06:30:00.207731 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03ab861-19bb-4215-9b19-990a14b35367" containerName="registry-server" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.207763 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03ab861-19bb-4215-9b19-990a14b35367" containerName="registry-server" 
Feb 19 06:30:00 crc kubenswrapper[5012]: E0219 06:30:00.207777 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03ab861-19bb-4215-9b19-990a14b35367" containerName="extract-utilities" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.207788 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03ab861-19bb-4215-9b19-990a14b35367" containerName="extract-utilities" Feb 19 06:30:00 crc kubenswrapper[5012]: E0219 06:30:00.207812 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03ab861-19bb-4215-9b19-990a14b35367" containerName="extract-content" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.207827 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03ab861-19bb-4215-9b19-990a14b35367" containerName="extract-content" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.208118 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="b03ab861-19bb-4215-9b19-990a14b35367" containerName="registry-server" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.209095 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.214190 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.221919 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674"] Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.222045 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.388556 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f4472c9-3299-45cf-95d4-af341606fb58-config-volume\") pod \"collect-profiles-29524710-fb674\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.389536 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mch7d\" (UniqueName: \"kubernetes.io/projected/4f4472c9-3299-45cf-95d4-af341606fb58-kube-api-access-mch7d\") pod \"collect-profiles-29524710-fb674\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.389886 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f4472c9-3299-45cf-95d4-af341606fb58-secret-volume\") pod \"collect-profiles-29524710-fb674\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.491525 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f4472c9-3299-45cf-95d4-af341606fb58-config-volume\") pod \"collect-profiles-29524710-fb674\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.491605 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mch7d\" (UniqueName: \"kubernetes.io/projected/4f4472c9-3299-45cf-95d4-af341606fb58-kube-api-access-mch7d\") pod \"collect-profiles-29524710-fb674\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.491686 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f4472c9-3299-45cf-95d4-af341606fb58-secret-volume\") pod \"collect-profiles-29524710-fb674\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.493186 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f4472c9-3299-45cf-95d4-af341606fb58-config-volume\") pod \"collect-profiles-29524710-fb674\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.504862 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4f4472c9-3299-45cf-95d4-af341606fb58-secret-volume\") pod \"collect-profiles-29524710-fb674\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.523926 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mch7d\" (UniqueName: \"kubernetes.io/projected/4f4472c9-3299-45cf-95d4-af341606fb58-kube-api-access-mch7d\") pod \"collect-profiles-29524710-fb674\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:00 crc kubenswrapper[5012]: I0219 06:30:00.531051 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:01 crc kubenswrapper[5012]: I0219 06:30:01.096117 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674"] Feb 19 06:30:01 crc kubenswrapper[5012]: W0219 06:30:01.099334 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f4472c9_3299_45cf_95d4_af341606fb58.slice/crio-5cf05c421ed982916fe3ef7f308c4cebdd9734238e78448b33292010c065a9e2 WatchSource:0}: Error finding container 5cf05c421ed982916fe3ef7f308c4cebdd9734238e78448b33292010c065a9e2: Status 404 returned error can't find the container with id 5cf05c421ed982916fe3ef7f308c4cebdd9734238e78448b33292010c065a9e2 Feb 19 06:30:01 crc kubenswrapper[5012]: I0219 06:30:01.422382 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" event={"ID":"4f4472c9-3299-45cf-95d4-af341606fb58","Type":"ContainerStarted","Data":"c006cdccaac79fb8cfe4dc746d53b358029e4ebb06f6b1804024382f3aa49800"} Feb 19 06:30:01 crc 
kubenswrapper[5012]: I0219 06:30:01.422860 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" event={"ID":"4f4472c9-3299-45cf-95d4-af341606fb58","Type":"ContainerStarted","Data":"5cf05c421ed982916fe3ef7f308c4cebdd9734238e78448b33292010c065a9e2"} Feb 19 06:30:01 crc kubenswrapper[5012]: I0219 06:30:01.447561 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" podStartSLOduration=1.447535971 podStartE2EDuration="1.447535971s" podCreationTimestamp="2026-02-19 06:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 06:30:01.440302744 +0000 UTC m=+3897.473625313" watchObservedRunningTime="2026-02-19 06:30:01.447535971 +0000 UTC m=+3897.480858550" Feb 19 06:30:02 crc kubenswrapper[5012]: I0219 06:30:02.445206 5012 generic.go:334] "Generic (PLEG): container finished" podID="4f4472c9-3299-45cf-95d4-af341606fb58" containerID="c006cdccaac79fb8cfe4dc746d53b358029e4ebb06f6b1804024382f3aa49800" exitCode=0 Feb 19 06:30:02 crc kubenswrapper[5012]: I0219 06:30:02.445373 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" event={"ID":"4f4472c9-3299-45cf-95d4-af341606fb58","Type":"ContainerDied","Data":"c006cdccaac79fb8cfe4dc746d53b358029e4ebb06f6b1804024382f3aa49800"} Feb 19 06:30:03 crc kubenswrapper[5012]: I0219 06:30:03.943558 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.093170 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f4472c9-3299-45cf-95d4-af341606fb58-secret-volume\") pod \"4f4472c9-3299-45cf-95d4-af341606fb58\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.093242 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f4472c9-3299-45cf-95d4-af341606fb58-config-volume\") pod \"4f4472c9-3299-45cf-95d4-af341606fb58\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.093275 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mch7d\" (UniqueName: \"kubernetes.io/projected/4f4472c9-3299-45cf-95d4-af341606fb58-kube-api-access-mch7d\") pod \"4f4472c9-3299-45cf-95d4-af341606fb58\" (UID: \"4f4472c9-3299-45cf-95d4-af341606fb58\") " Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.094670 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f4472c9-3299-45cf-95d4-af341606fb58-config-volume" (OuterVolumeSpecName: "config-volume") pod "4f4472c9-3299-45cf-95d4-af341606fb58" (UID: "4f4472c9-3299-45cf-95d4-af341606fb58"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.100040 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f4472c9-3299-45cf-95d4-af341606fb58-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4f4472c9-3299-45cf-95d4-af341606fb58" (UID: "4f4472c9-3299-45cf-95d4-af341606fb58"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.102519 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f4472c9-3299-45cf-95d4-af341606fb58-kube-api-access-mch7d" (OuterVolumeSpecName: "kube-api-access-mch7d") pod "4f4472c9-3299-45cf-95d4-af341606fb58" (UID: "4f4472c9-3299-45cf-95d4-af341606fb58"). InnerVolumeSpecName "kube-api-access-mch7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.196670 5012 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4f4472c9-3299-45cf-95d4-af341606fb58-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.196711 5012 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f4472c9-3299-45cf-95d4-af341606fb58-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.196723 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mch7d\" (UniqueName: \"kubernetes.io/projected/4f4472c9-3299-45cf-95d4-af341606fb58-kube-api-access-mch7d\") on node \"crc\" DevicePath \"\"" Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.475289 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" event={"ID":"4f4472c9-3299-45cf-95d4-af341606fb58","Type":"ContainerDied","Data":"5cf05c421ed982916fe3ef7f308c4cebdd9734238e78448b33292010c065a9e2"} Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.475341 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cf05c421ed982916fe3ef7f308c4cebdd9734238e78448b33292010c065a9e2" Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.475456 5012 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524710-fb674" Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.550686 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v"] Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.559956 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524665-pjx7v"] Feb 19 06:30:04 crc kubenswrapper[5012]: I0219 06:30:04.729947 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46070367-1765-4a70-b997-58b87ee1fbf1" path="/var/lib/kubelet/pods/46070367-1765-4a70-b997-58b87ee1fbf1/volumes" Feb 19 06:30:38 crc kubenswrapper[5012]: I0219 06:30:38.926638 5012 scope.go:117] "RemoveContainer" containerID="ef8d233d5ce4a4673c65e084ba6deb20a57df07604ba44e351882efa60733381" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.316081 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lllnk"] Feb 19 06:30:53 crc kubenswrapper[5012]: E0219 06:30:53.317392 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f4472c9-3299-45cf-95d4-af341606fb58" containerName="collect-profiles" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.317408 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f4472c9-3299-45cf-95d4-af341606fb58" containerName="collect-profiles" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.317741 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f4472c9-3299-45cf-95d4-af341606fb58" containerName="collect-profiles" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.319717 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.333489 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lllnk"] Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.448731 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzvz6\" (UniqueName: \"kubernetes.io/projected/f3768d99-6ea2-494b-bce6-a469804e6f6f-kube-api-access-dzvz6\") pod \"redhat-operators-lllnk\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.448918 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-utilities\") pod \"redhat-operators-lllnk\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.448966 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-catalog-content\") pod \"redhat-operators-lllnk\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.550800 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzvz6\" (UniqueName: \"kubernetes.io/projected/f3768d99-6ea2-494b-bce6-a469804e6f6f-kube-api-access-dzvz6\") pod \"redhat-operators-lllnk\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.550947 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-utilities\") pod \"redhat-operators-lllnk\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.550981 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-catalog-content\") pod \"redhat-operators-lllnk\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.551542 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-utilities\") pod \"redhat-operators-lllnk\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.551636 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-catalog-content\") pod \"redhat-operators-lllnk\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.572211 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzvz6\" (UniqueName: \"kubernetes.io/projected/f3768d99-6ea2-494b-bce6-a469804e6f6f-kube-api-access-dzvz6\") pod \"redhat-operators-lllnk\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:53 crc kubenswrapper[5012]: I0219 06:30:53.651576 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:30:54 crc kubenswrapper[5012]: I0219 06:30:54.223213 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lllnk"] Feb 19 06:30:54 crc kubenswrapper[5012]: W0219 06:30:54.237809 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3768d99_6ea2_494b_bce6_a469804e6f6f.slice/crio-a06705793d2a359ee88ce0ad748070766679d29c188e121f2bda9010943558e3 WatchSource:0}: Error finding container a06705793d2a359ee88ce0ad748070766679d29c188e121f2bda9010943558e3: Status 404 returned error can't find the container with id a06705793d2a359ee88ce0ad748070766679d29c188e121f2bda9010943558e3 Feb 19 06:30:55 crc kubenswrapper[5012]: I0219 06:30:55.070089 5012 generic.go:334] "Generic (PLEG): container finished" podID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerID="1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0" exitCode=0 Feb 19 06:30:55 crc kubenswrapper[5012]: I0219 06:30:55.070469 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lllnk" event={"ID":"f3768d99-6ea2-494b-bce6-a469804e6f6f","Type":"ContainerDied","Data":"1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0"} Feb 19 06:30:55 crc kubenswrapper[5012]: I0219 06:30:55.070516 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lllnk" event={"ID":"f3768d99-6ea2-494b-bce6-a469804e6f6f","Type":"ContainerStarted","Data":"a06705793d2a359ee88ce0ad748070766679d29c188e121f2bda9010943558e3"} Feb 19 06:30:57 crc kubenswrapper[5012]: I0219 06:30:57.100109 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lllnk" 
event={"ID":"f3768d99-6ea2-494b-bce6-a469804e6f6f","Type":"ContainerStarted","Data":"9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33"} Feb 19 06:31:01 crc kubenswrapper[5012]: I0219 06:31:01.141905 5012 generic.go:334] "Generic (PLEG): container finished" podID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerID="9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33" exitCode=0 Feb 19 06:31:01 crc kubenswrapper[5012]: I0219 06:31:01.142005 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lllnk" event={"ID":"f3768d99-6ea2-494b-bce6-a469804e6f6f","Type":"ContainerDied","Data":"9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33"} Feb 19 06:31:02 crc kubenswrapper[5012]: I0219 06:31:02.153181 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lllnk" event={"ID":"f3768d99-6ea2-494b-bce6-a469804e6f6f","Type":"ContainerStarted","Data":"e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91"} Feb 19 06:31:02 crc kubenswrapper[5012]: I0219 06:31:02.171838 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lllnk" podStartSLOduration=2.529620654 podStartE2EDuration="9.171811505s" podCreationTimestamp="2026-02-19 06:30:53 +0000 UTC" firstStartedPulling="2026-02-19 06:30:55.121189256 +0000 UTC m=+3951.154511825" lastFinishedPulling="2026-02-19 06:31:01.763380067 +0000 UTC m=+3957.796702676" observedRunningTime="2026-02-19 06:31:02.169237112 +0000 UTC m=+3958.202559751" watchObservedRunningTime="2026-02-19 06:31:02.171811505 +0000 UTC m=+3958.205134114" Feb 19 06:31:03 crc kubenswrapper[5012]: I0219 06:31:03.653275 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:31:03 crc kubenswrapper[5012]: I0219 06:31:03.655024 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:31:04 crc kubenswrapper[5012]: I0219 06:31:04.729792 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lllnk" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerName="registry-server" probeResult="failure" output=< Feb 19 06:31:04 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 06:31:04 crc kubenswrapper[5012]: > Feb 19 06:31:13 crc kubenswrapper[5012]: I0219 06:31:13.735474 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:31:13 crc kubenswrapper[5012]: I0219 06:31:13.807073 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:31:13 crc kubenswrapper[5012]: I0219 06:31:13.976391 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lllnk"] Feb 19 06:31:15 crc kubenswrapper[5012]: I0219 06:31:15.296471 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lllnk" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerName="registry-server" containerID="cri-o://e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91" gracePeriod=2 Feb 19 06:31:15 crc kubenswrapper[5012]: I0219 06:31:15.815400 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:31:15 crc kubenswrapper[5012]: I0219 06:31:15.911862 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-catalog-content\") pod \"f3768d99-6ea2-494b-bce6-a469804e6f6f\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " Feb 19 06:31:15 crc kubenswrapper[5012]: I0219 06:31:15.912060 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-utilities\") pod \"f3768d99-6ea2-494b-bce6-a469804e6f6f\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " Feb 19 06:31:15 crc kubenswrapper[5012]: I0219 06:31:15.912221 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzvz6\" (UniqueName: \"kubernetes.io/projected/f3768d99-6ea2-494b-bce6-a469804e6f6f-kube-api-access-dzvz6\") pod \"f3768d99-6ea2-494b-bce6-a469804e6f6f\" (UID: \"f3768d99-6ea2-494b-bce6-a469804e6f6f\") " Feb 19 06:31:15 crc kubenswrapper[5012]: I0219 06:31:15.913801 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-utilities" (OuterVolumeSpecName: "utilities") pod "f3768d99-6ea2-494b-bce6-a469804e6f6f" (UID: "f3768d99-6ea2-494b-bce6-a469804e6f6f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:31:15 crc kubenswrapper[5012]: I0219 06:31:15.919333 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3768d99-6ea2-494b-bce6-a469804e6f6f-kube-api-access-dzvz6" (OuterVolumeSpecName: "kube-api-access-dzvz6") pod "f3768d99-6ea2-494b-bce6-a469804e6f6f" (UID: "f3768d99-6ea2-494b-bce6-a469804e6f6f"). InnerVolumeSpecName "kube-api-access-dzvz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.014513 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.014542 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzvz6\" (UniqueName: \"kubernetes.io/projected/f3768d99-6ea2-494b-bce6-a469804e6f6f-kube-api-access-dzvz6\") on node \"crc\" DevicePath \"\"" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.067807 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3768d99-6ea2-494b-bce6-a469804e6f6f" (UID: "f3768d99-6ea2-494b-bce6-a469804e6f6f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.116595 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3768d99-6ea2-494b-bce6-a469804e6f6f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.309404 5012 generic.go:334] "Generic (PLEG): container finished" podID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerID="e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91" exitCode=0 Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.309456 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lllnk" event={"ID":"f3768d99-6ea2-494b-bce6-a469804e6f6f","Type":"ContainerDied","Data":"e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91"} Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.309488 5012 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-lllnk" event={"ID":"f3768d99-6ea2-494b-bce6-a469804e6f6f","Type":"ContainerDied","Data":"a06705793d2a359ee88ce0ad748070766679d29c188e121f2bda9010943558e3"} Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.309526 5012 scope.go:117] "RemoveContainer" containerID="e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.309527 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lllnk" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.400629 5012 scope.go:117] "RemoveContainer" containerID="9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.403919 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lllnk"] Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.413207 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lllnk"] Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.432228 5012 scope.go:117] "RemoveContainer" containerID="1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.529693 5012 scope.go:117] "RemoveContainer" containerID="e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91" Feb 19 06:31:16 crc kubenswrapper[5012]: E0219 06:31:16.530134 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91\": container with ID starting with e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91 not found: ID does not exist" containerID="e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.530180 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91"} err="failed to get container status \"e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91\": rpc error: code = NotFound desc = could not find container \"e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91\": container with ID starting with e8a8076466200b8c3d7d1492437eaf30ac4a495d4a63c8e7d413ea8ae3a2df91 not found: ID does not exist" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.530202 5012 scope.go:117] "RemoveContainer" containerID="9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33" Feb 19 06:31:16 crc kubenswrapper[5012]: E0219 06:31:16.530749 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33\": container with ID starting with 9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33 not found: ID does not exist" containerID="9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.530809 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33"} err="failed to get container status \"9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33\": rpc error: code = NotFound desc = could not find container \"9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33\": container with ID starting with 9d3fb1baa5653429b567f61fa7d7928d682d33822f963a5aa0847c6a1c3cca33 not found: ID does not exist" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.530836 5012 scope.go:117] "RemoveContainer" containerID="1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0" Feb 19 06:31:16 crc kubenswrapper[5012]: E0219 
06:31:16.531164 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0\": container with ID starting with 1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0 not found: ID does not exist" containerID="1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.531206 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0"} err="failed to get container status \"1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0\": rpc error: code = NotFound desc = could not find container \"1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0\": container with ID starting with 1d4b08d1b82f1a048f2eb2b1721a7f1da43c47c0a235c039867d7e43581529c0 not found: ID does not exist" Feb 19 06:31:16 crc kubenswrapper[5012]: I0219 06:31:16.713551 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" path="/var/lib/kubelet/pods/f3768d99-6ea2-494b-bce6-a469804e6f6f/volumes" Feb 19 06:31:44 crc kubenswrapper[5012]: I0219 06:31:44.431003 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:31:44 crc kubenswrapper[5012]: I0219 06:31:44.431694 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 19 06:32:14 crc kubenswrapper[5012]: I0219 06:32:14.430902 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:32:14 crc kubenswrapper[5012]: I0219 06:32:14.431706 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:32:44 crc kubenswrapper[5012]: I0219 06:32:44.431241 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:32:44 crc kubenswrapper[5012]: I0219 06:32:44.432001 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:32:44 crc kubenswrapper[5012]: I0219 06:32:44.432073 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:32:44 crc kubenswrapper[5012]: I0219 06:32:44.433253 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a8624682a1b0fe5d91d96534d39753294d14b7d998fb6da563d9b8a2dee2a6b7"} 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:32:44 crc kubenswrapper[5012]: I0219 06:32:44.433383 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://a8624682a1b0fe5d91d96534d39753294d14b7d998fb6da563d9b8a2dee2a6b7" gracePeriod=600 Feb 19 06:32:45 crc kubenswrapper[5012]: I0219 06:32:45.351087 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="a8624682a1b0fe5d91d96534d39753294d14b7d998fb6da563d9b8a2dee2a6b7" exitCode=0 Feb 19 06:32:45 crc kubenswrapper[5012]: I0219 06:32:45.351186 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"a8624682a1b0fe5d91d96534d39753294d14b7d998fb6da563d9b8a2dee2a6b7"} Feb 19 06:32:45 crc kubenswrapper[5012]: I0219 06:32:45.351659 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0"} Feb 19 06:32:45 crc kubenswrapper[5012]: I0219 06:32:45.351688 5012 scope.go:117] "RemoveContainer" containerID="379f8bab3782c15cd43e2d7b0504b94858995aafc3eaae361dc1bb00d442c6fa" Feb 19 06:34:44 crc kubenswrapper[5012]: I0219 06:34:44.431067 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 19 06:34:44 crc kubenswrapper[5012]: I0219 06:34:44.432042 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.333886 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x5n5b"] Feb 19 06:34:52 crc kubenswrapper[5012]: E0219 06:34:52.335502 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerName="registry-server" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.335528 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerName="registry-server" Feb 19 06:34:52 crc kubenswrapper[5012]: E0219 06:34:52.335555 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerName="extract-content" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.335569 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerName="extract-content" Feb 19 06:34:52 crc kubenswrapper[5012]: E0219 06:34:52.335622 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerName="extract-utilities" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.335640 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerName="extract-utilities" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.336021 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3768d99-6ea2-494b-bce6-a469804e6f6f" containerName="registry-server" Feb 19 06:34:52 crc 
kubenswrapper[5012]: I0219 06:34:52.338730 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.362652 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x5n5b"] Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.420319 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-utilities\") pod \"certified-operators-x5n5b\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.420421 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-catalog-content\") pod \"certified-operators-x5n5b\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.420608 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g8mb\" (UniqueName: \"kubernetes.io/projected/dda56b59-9803-4c72-8018-cf68bb4c543c-kube-api-access-2g8mb\") pod \"certified-operators-x5n5b\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.522873 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-utilities\") pod \"certified-operators-x5n5b\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc 
kubenswrapper[5012]: I0219 06:34:52.522947 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-catalog-content\") pod \"certified-operators-x5n5b\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.523075 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g8mb\" (UniqueName: \"kubernetes.io/projected/dda56b59-9803-4c72-8018-cf68bb4c543c-kube-api-access-2g8mb\") pod \"certified-operators-x5n5b\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.523577 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-utilities\") pod \"certified-operators-x5n5b\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.523658 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-catalog-content\") pod \"certified-operators-x5n5b\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 06:34:52.557227 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g8mb\" (UniqueName: \"kubernetes.io/projected/dda56b59-9803-4c72-8018-cf68bb4c543c-kube-api-access-2g8mb\") pod \"certified-operators-x5n5b\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:52 crc kubenswrapper[5012]: I0219 
06:34:52.696371 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:34:53 crc kubenswrapper[5012]: I0219 06:34:53.220630 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x5n5b"] Feb 19 06:34:54 crc kubenswrapper[5012]: I0219 06:34:54.005106 5012 generic.go:334] "Generic (PLEG): container finished" podID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerID="318b1dbceebe4140b4f2722f73afebdf8799017f86633f49bf31c3697900743c" exitCode=0 Feb 19 06:34:54 crc kubenswrapper[5012]: I0219 06:34:54.005214 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5n5b" event={"ID":"dda56b59-9803-4c72-8018-cf68bb4c543c","Type":"ContainerDied","Data":"318b1dbceebe4140b4f2722f73afebdf8799017f86633f49bf31c3697900743c"} Feb 19 06:34:54 crc kubenswrapper[5012]: I0219 06:34:54.005641 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5n5b" event={"ID":"dda56b59-9803-4c72-8018-cf68bb4c543c","Type":"ContainerStarted","Data":"d7a534106ffeb4a71bd58d47b75e229b45f11ce10463c0bc23fa5f8d7f17d1eb"} Feb 19 06:34:54 crc kubenswrapper[5012]: I0219 06:34:54.009535 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 06:34:56 crc kubenswrapper[5012]: I0219 06:34:56.036467 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5n5b" event={"ID":"dda56b59-9803-4c72-8018-cf68bb4c543c","Type":"ContainerStarted","Data":"f94dfe8658183190b4b7f97cec9a4d23cae32448a7a614b815c534822bd6b108"} Feb 19 06:34:57 crc kubenswrapper[5012]: I0219 06:34:57.052757 5012 generic.go:334] "Generic (PLEG): container finished" podID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerID="f94dfe8658183190b4b7f97cec9a4d23cae32448a7a614b815c534822bd6b108" exitCode=0 Feb 19 06:34:57 crc 
kubenswrapper[5012]: I0219 06:34:57.052864 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5n5b" event={"ID":"dda56b59-9803-4c72-8018-cf68bb4c543c","Type":"ContainerDied","Data":"f94dfe8658183190b4b7f97cec9a4d23cae32448a7a614b815c534822bd6b108"} Feb 19 06:34:58 crc kubenswrapper[5012]: I0219 06:34:58.068616 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5n5b" event={"ID":"dda56b59-9803-4c72-8018-cf68bb4c543c","Type":"ContainerStarted","Data":"67e74658237f4f28b8c38e733efc54f02924544aaa2ad7c1de72e8a40944c0a7"} Feb 19 06:34:58 crc kubenswrapper[5012]: I0219 06:34:58.098112 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x5n5b" podStartSLOduration=2.643510505 podStartE2EDuration="6.098088373s" podCreationTimestamp="2026-02-19 06:34:52 +0000 UTC" firstStartedPulling="2026-02-19 06:34:54.008109359 +0000 UTC m=+4190.041431968" lastFinishedPulling="2026-02-19 06:34:57.462687227 +0000 UTC m=+4193.496009836" observedRunningTime="2026-02-19 06:34:58.094765762 +0000 UTC m=+4194.128088351" watchObservedRunningTime="2026-02-19 06:34:58.098088373 +0000 UTC m=+4194.131410942" Feb 19 06:35:02 crc kubenswrapper[5012]: I0219 06:35:02.697486 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:35:02 crc kubenswrapper[5012]: I0219 06:35:02.698163 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:35:03 crc kubenswrapper[5012]: I0219 06:35:03.400057 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:35:03 crc kubenswrapper[5012]: I0219 06:35:03.495476 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:35:03 crc kubenswrapper[5012]: I0219 06:35:03.657635 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x5n5b"] Feb 19 06:35:05 crc kubenswrapper[5012]: I0219 06:35:05.152895 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x5n5b" podUID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerName="registry-server" containerID="cri-o://67e74658237f4f28b8c38e733efc54f02924544aaa2ad7c1de72e8a40944c0a7" gracePeriod=2 Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.191624 5012 generic.go:334] "Generic (PLEG): container finished" podID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerID="67e74658237f4f28b8c38e733efc54f02924544aaa2ad7c1de72e8a40944c0a7" exitCode=0 Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.191720 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5n5b" event={"ID":"dda56b59-9803-4c72-8018-cf68bb4c543c","Type":"ContainerDied","Data":"67e74658237f4f28b8c38e733efc54f02924544aaa2ad7c1de72e8a40944c0a7"} Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.696968 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.837851 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g8mb\" (UniqueName: \"kubernetes.io/projected/dda56b59-9803-4c72-8018-cf68bb4c543c-kube-api-access-2g8mb\") pod \"dda56b59-9803-4c72-8018-cf68bb4c543c\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.837964 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-utilities\") pod \"dda56b59-9803-4c72-8018-cf68bb4c543c\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.838059 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-catalog-content\") pod \"dda56b59-9803-4c72-8018-cf68bb4c543c\" (UID: \"dda56b59-9803-4c72-8018-cf68bb4c543c\") " Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.842107 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-utilities" (OuterVolumeSpecName: "utilities") pod "dda56b59-9803-4c72-8018-cf68bb4c543c" (UID: "dda56b59-9803-4c72-8018-cf68bb4c543c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.848139 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dda56b59-9803-4c72-8018-cf68bb4c543c-kube-api-access-2g8mb" (OuterVolumeSpecName: "kube-api-access-2g8mb") pod "dda56b59-9803-4c72-8018-cf68bb4c543c" (UID: "dda56b59-9803-4c72-8018-cf68bb4c543c"). InnerVolumeSpecName "kube-api-access-2g8mb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.916419 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dda56b59-9803-4c72-8018-cf68bb4c543c" (UID: "dda56b59-9803-4c72-8018-cf68bb4c543c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.940852 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g8mb\" (UniqueName: \"kubernetes.io/projected/dda56b59-9803-4c72-8018-cf68bb4c543c-kube-api-access-2g8mb\") on node \"crc\" DevicePath \"\"" Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.940883 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:35:07 crc kubenswrapper[5012]: I0219 06:35:07.940899 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda56b59-9803-4c72-8018-cf68bb4c543c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:35:08 crc kubenswrapper[5012]: I0219 06:35:08.209438 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5n5b" event={"ID":"dda56b59-9803-4c72-8018-cf68bb4c543c","Type":"ContainerDied","Data":"d7a534106ffeb4a71bd58d47b75e229b45f11ce10463c0bc23fa5f8d7f17d1eb"} Feb 19 06:35:08 crc kubenswrapper[5012]: I0219 06:35:08.209527 5012 scope.go:117] "RemoveContainer" containerID="67e74658237f4f28b8c38e733efc54f02924544aaa2ad7c1de72e8a40944c0a7" Feb 19 06:35:08 crc kubenswrapper[5012]: I0219 06:35:08.209566 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x5n5b" Feb 19 06:35:08 crc kubenswrapper[5012]: I0219 06:35:08.247842 5012 scope.go:117] "RemoveContainer" containerID="f94dfe8658183190b4b7f97cec9a4d23cae32448a7a614b815c534822bd6b108" Feb 19 06:35:08 crc kubenswrapper[5012]: I0219 06:35:08.277490 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x5n5b"] Feb 19 06:35:08 crc kubenswrapper[5012]: I0219 06:35:08.296043 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x5n5b"] Feb 19 06:35:08 crc kubenswrapper[5012]: I0219 06:35:08.302207 5012 scope.go:117] "RemoveContainer" containerID="318b1dbceebe4140b4f2722f73afebdf8799017f86633f49bf31c3697900743c" Feb 19 06:35:08 crc kubenswrapper[5012]: I0219 06:35:08.731003 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dda56b59-9803-4c72-8018-cf68bb4c543c" path="/var/lib/kubelet/pods/dda56b59-9803-4c72-8018-cf68bb4c543c/volumes" Feb 19 06:35:14 crc kubenswrapper[5012]: I0219 06:35:14.431050 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:35:14 crc kubenswrapper[5012]: I0219 06:35:14.431743 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:35:44 crc kubenswrapper[5012]: I0219 06:35:44.431286 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:35:44 crc kubenswrapper[5012]: I0219 06:35:44.432044 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:35:44 crc kubenswrapper[5012]: I0219 06:35:44.432120 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:35:44 crc kubenswrapper[5012]: I0219 06:35:44.433350 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:35:44 crc kubenswrapper[5012]: I0219 06:35:44.433456 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" gracePeriod=600 Feb 19 06:35:44 crc kubenswrapper[5012]: I0219 06:35:44.603645 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" exitCode=0 Feb 19 06:35:44 crc kubenswrapper[5012]: I0219 06:35:44.603854 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0"} Feb 19 06:35:44 crc kubenswrapper[5012]: I0219 06:35:44.604108 5012 scope.go:117] "RemoveContainer" containerID="a8624682a1b0fe5d91d96534d39753294d14b7d998fb6da563d9b8a2dee2a6b7" Feb 19 06:35:45 crc kubenswrapper[5012]: E0219 06:35:45.134575 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:35:45 crc kubenswrapper[5012]: I0219 06:35:45.615233 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:35:45 crc kubenswrapper[5012]: E0219 06:35:45.615918 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:36:00 crc kubenswrapper[5012]: I0219 06:36:00.704509 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:36:00 crc kubenswrapper[5012]: E0219 06:36:00.705607 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:36:11 crc kubenswrapper[5012]: I0219 06:36:11.703236 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:36:11 crc kubenswrapper[5012]: E0219 06:36:11.704373 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:36:22 crc kubenswrapper[5012]: I0219 06:36:22.704802 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:36:22 crc kubenswrapper[5012]: E0219 06:36:22.705813 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:36:34 crc kubenswrapper[5012]: I0219 06:36:34.711445 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:36:34 crc kubenswrapper[5012]: E0219 06:36:34.712427 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:36:49 crc kubenswrapper[5012]: I0219 06:36:49.703771 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:36:49 crc kubenswrapper[5012]: E0219 06:36:49.704997 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:37:00 crc kubenswrapper[5012]: I0219 06:37:00.704185 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:37:00 crc kubenswrapper[5012]: E0219 06:37:00.705425 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:37:12 crc kubenswrapper[5012]: I0219 06:37:12.703876 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:37:12 crc kubenswrapper[5012]: E0219 06:37:12.704754 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:37:27 crc kubenswrapper[5012]: I0219 06:37:27.703621 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:37:27 crc kubenswrapper[5012]: E0219 06:37:27.704560 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:37:39 crc kubenswrapper[5012]: I0219 06:37:39.705107 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:37:39 crc kubenswrapper[5012]: E0219 06:37:39.706164 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:37:52 crc kubenswrapper[5012]: I0219 06:37:52.704356 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:37:52 crc kubenswrapper[5012]: E0219 06:37:52.705485 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:38:07 crc kubenswrapper[5012]: I0219 06:38:07.702834 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:38:07 crc kubenswrapper[5012]: E0219 06:38:07.703512 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:38:21 crc kubenswrapper[5012]: I0219 06:38:21.703112 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:38:21 crc kubenswrapper[5012]: E0219 06:38:21.703803 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:38:32 crc kubenswrapper[5012]: I0219 06:38:32.703361 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:38:32 crc kubenswrapper[5012]: E0219 06:38:32.705882 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:38:46 crc kubenswrapper[5012]: I0219 06:38:46.703002 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:38:46 crc kubenswrapper[5012]: E0219 06:38:46.704187 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:39:00 crc kubenswrapper[5012]: I0219 06:39:00.703390 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:39:00 crc kubenswrapper[5012]: E0219 06:39:00.704318 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:39:11 crc kubenswrapper[5012]: I0219 06:39:11.702952 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:39:11 crc kubenswrapper[5012]: E0219 06:39:11.704023 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:39:25 crc kubenswrapper[5012]: I0219 06:39:25.703772 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:39:25 crc kubenswrapper[5012]: E0219 06:39:25.704963 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:39:36 crc kubenswrapper[5012]: I0219 06:39:36.703627 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:39:36 crc kubenswrapper[5012]: E0219 06:39:36.704705 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:39:47 crc kubenswrapper[5012]: I0219 06:39:47.703653 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:39:47 crc kubenswrapper[5012]: E0219 06:39:47.705363 5012 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:40:01 crc kubenswrapper[5012]: I0219 06:40:01.703399 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:40:01 crc kubenswrapper[5012]: E0219 06:40:01.704259 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.053736 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vlg7v"] Feb 19 06:40:02 crc kubenswrapper[5012]: E0219 06:40:02.054291 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerName="registry-server" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.054327 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerName="registry-server" Feb 19 06:40:02 crc kubenswrapper[5012]: E0219 06:40:02.054344 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerName="extract-content" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.054352 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda56b59-9803-4c72-8018-cf68bb4c543c" 
containerName="extract-content" Feb 19 06:40:02 crc kubenswrapper[5012]: E0219 06:40:02.054368 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerName="extract-utilities" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.054377 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerName="extract-utilities" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.054666 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="dda56b59-9803-4c72-8018-cf68bb4c543c" containerName="registry-server" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.056705 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.091547 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vlg7v"] Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.231649 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-catalog-content\") pod \"community-operators-vlg7v\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.231943 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-utilities\") pod \"community-operators-vlg7v\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.232021 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7vvrx\" (UniqueName: \"kubernetes.io/projected/e5d3e688-640e-4cd4-9729-8405195032a3-kube-api-access-7vvrx\") pod \"community-operators-vlg7v\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.334395 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-catalog-content\") pod \"community-operators-vlg7v\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.334854 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-utilities\") pod \"community-operators-vlg7v\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.334901 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vvrx\" (UniqueName: \"kubernetes.io/projected/e5d3e688-640e-4cd4-9729-8405195032a3-kube-api-access-7vvrx\") pod \"community-operators-vlg7v\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.334938 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-catalog-content\") pod \"community-operators-vlg7v\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.335324 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-utilities\") pod \"community-operators-vlg7v\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.354922 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vvrx\" (UniqueName: \"kubernetes.io/projected/e5d3e688-640e-4cd4-9729-8405195032a3-kube-api-access-7vvrx\") pod \"community-operators-vlg7v\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.408894 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:02 crc kubenswrapper[5012]: I0219 06:40:02.907423 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vlg7v"] Feb 19 06:40:03 crc kubenswrapper[5012]: I0219 06:40:03.746694 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlg7v" event={"ID":"e5d3e688-640e-4cd4-9729-8405195032a3","Type":"ContainerStarted","Data":"7754906bff20c85446814ba1839e8c3dc453d223f63e48b12a1c0b3a943d78c1"} Feb 19 06:40:04 crc kubenswrapper[5012]: I0219 06:40:04.763296 5012 generic.go:334] "Generic (PLEG): container finished" podID="e5d3e688-640e-4cd4-9729-8405195032a3" containerID="edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb" exitCode=0 Feb 19 06:40:04 crc kubenswrapper[5012]: I0219 06:40:04.763391 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlg7v" event={"ID":"e5d3e688-640e-4cd4-9729-8405195032a3","Type":"ContainerDied","Data":"edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb"} Feb 19 06:40:04 crc kubenswrapper[5012]: I0219 06:40:04.766852 5012 provider.go:102] Refreshing cache 
for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 06:40:05 crc kubenswrapper[5012]: I0219 06:40:05.779664 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlg7v" event={"ID":"e5d3e688-640e-4cd4-9729-8405195032a3","Type":"ContainerStarted","Data":"d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251"} Feb 19 06:40:06 crc kubenswrapper[5012]: I0219 06:40:06.800220 5012 generic.go:334] "Generic (PLEG): container finished" podID="e5d3e688-640e-4cd4-9729-8405195032a3" containerID="d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251" exitCode=0 Feb 19 06:40:06 crc kubenswrapper[5012]: I0219 06:40:06.800291 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlg7v" event={"ID":"e5d3e688-640e-4cd4-9729-8405195032a3","Type":"ContainerDied","Data":"d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251"} Feb 19 06:40:07 crc kubenswrapper[5012]: I0219 06:40:07.818853 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlg7v" event={"ID":"e5d3e688-640e-4cd4-9729-8405195032a3","Type":"ContainerStarted","Data":"785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998"} Feb 19 06:40:07 crc kubenswrapper[5012]: I0219 06:40:07.857674 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vlg7v" podStartSLOduration=3.381970489 podStartE2EDuration="5.857651935s" podCreationTimestamp="2026-02-19 06:40:02 +0000 UTC" firstStartedPulling="2026-02-19 06:40:04.765940286 +0000 UTC m=+4500.799262895" lastFinishedPulling="2026-02-19 06:40:07.241621772 +0000 UTC m=+4503.274944341" observedRunningTime="2026-02-19 06:40:07.845795021 +0000 UTC m=+4503.879117620" watchObservedRunningTime="2026-02-19 06:40:07.857651935 +0000 UTC m=+4503.890974524" Feb 19 06:40:12 crc kubenswrapper[5012]: I0219 06:40:12.409681 5012 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:12 crc kubenswrapper[5012]: I0219 06:40:12.410468 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:12 crc kubenswrapper[5012]: I0219 06:40:12.494474 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:12 crc kubenswrapper[5012]: I0219 06:40:12.934499 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:12 crc kubenswrapper[5012]: I0219 06:40:12.995479 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vlg7v"] Feb 19 06:40:14 crc kubenswrapper[5012]: I0219 06:40:14.908598 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vlg7v" podUID="e5d3e688-640e-4cd4-9729-8405195032a3" containerName="registry-server" containerID="cri-o://785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998" gracePeriod=2 Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.410334 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.570929 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-utilities\") pod \"e5d3e688-640e-4cd4-9729-8405195032a3\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.571151 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vvrx\" (UniqueName: \"kubernetes.io/projected/e5d3e688-640e-4cd4-9729-8405195032a3-kube-api-access-7vvrx\") pod \"e5d3e688-640e-4cd4-9729-8405195032a3\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.571427 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-catalog-content\") pod \"e5d3e688-640e-4cd4-9729-8405195032a3\" (UID: \"e5d3e688-640e-4cd4-9729-8405195032a3\") " Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.572906 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-utilities" (OuterVolumeSpecName: "utilities") pod "e5d3e688-640e-4cd4-9729-8405195032a3" (UID: "e5d3e688-640e-4cd4-9729-8405195032a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.581376 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d3e688-640e-4cd4-9729-8405195032a3-kube-api-access-7vvrx" (OuterVolumeSpecName: "kube-api-access-7vvrx") pod "e5d3e688-640e-4cd4-9729-8405195032a3" (UID: "e5d3e688-640e-4cd4-9729-8405195032a3"). InnerVolumeSpecName "kube-api-access-7vvrx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.651295 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5d3e688-640e-4cd4-9729-8405195032a3" (UID: "e5d3e688-640e-4cd4-9729-8405195032a3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.674065 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vvrx\" (UniqueName: \"kubernetes.io/projected/e5d3e688-640e-4cd4-9729-8405195032a3-kube-api-access-7vvrx\") on node \"crc\" DevicePath \"\"" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.674111 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.674130 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5d3e688-640e-4cd4-9729-8405195032a3-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.924609 5012 generic.go:334] "Generic (PLEG): container finished" podID="e5d3e688-640e-4cd4-9729-8405195032a3" containerID="785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998" exitCode=0 Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.924746 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vlg7v" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.924738 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlg7v" event={"ID":"e5d3e688-640e-4cd4-9729-8405195032a3","Type":"ContainerDied","Data":"785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998"} Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.925783 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vlg7v" event={"ID":"e5d3e688-640e-4cd4-9729-8405195032a3","Type":"ContainerDied","Data":"7754906bff20c85446814ba1839e8c3dc453d223f63e48b12a1c0b3a943d78c1"} Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.925817 5012 scope.go:117] "RemoveContainer" containerID="785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.960526 5012 scope.go:117] "RemoveContainer" containerID="d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251" Feb 19 06:40:15 crc kubenswrapper[5012]: I0219 06:40:15.995827 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vlg7v"] Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.004088 5012 scope.go:117] "RemoveContainer" containerID="edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb" Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.012382 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vlg7v"] Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.079390 5012 scope.go:117] "RemoveContainer" containerID="785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998" Feb 19 06:40:16 crc kubenswrapper[5012]: E0219 06:40:16.079849 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998\": container with ID starting with 785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998 not found: ID does not exist" containerID="785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998" Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.079908 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998"} err="failed to get container status \"785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998\": rpc error: code = NotFound desc = could not find container \"785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998\": container with ID starting with 785752dc08651df850818b3bba2122dd2adae6e69d2fdf540c19ca64d531b998 not found: ID does not exist" Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.079944 5012 scope.go:117] "RemoveContainer" containerID="d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251" Feb 19 06:40:16 crc kubenswrapper[5012]: E0219 06:40:16.080621 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251\": container with ID starting with d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251 not found: ID does not exist" containerID="d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251" Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.080661 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251"} err="failed to get container status \"d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251\": rpc error: code = NotFound desc = could not find container \"d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251\": container with ID 
starting with d6787a9ff56d6b3eae24d3f28d591fbda41dc6d796a69b8e379731cb52b17251 not found: ID does not exist" Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.080690 5012 scope.go:117] "RemoveContainer" containerID="edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb" Feb 19 06:40:16 crc kubenswrapper[5012]: E0219 06:40:16.082818 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb\": container with ID starting with edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb not found: ID does not exist" containerID="edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb" Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.082851 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb"} err="failed to get container status \"edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb\": rpc error: code = NotFound desc = could not find container \"edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb\": container with ID starting with edd2f63b1acd26d06ce778a6204b0d0765b1a2c00b4d909c17e0d4c2893f4acb not found: ID does not exist" Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.703740 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:40:16 crc kubenswrapper[5012]: E0219 06:40:16.704760 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:40:16 crc kubenswrapper[5012]: I0219 06:40:16.732500 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d3e688-640e-4cd4-9729-8405195032a3" path="/var/lib/kubelet/pods/e5d3e688-640e-4cd4-9729-8405195032a3/volumes" Feb 19 06:40:29 crc kubenswrapper[5012]: I0219 06:40:29.703236 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:40:29 crc kubenswrapper[5012]: E0219 06:40:29.704164 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:40:43 crc kubenswrapper[5012]: I0219 06:40:43.703243 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:40:43 crc kubenswrapper[5012]: E0219 06:40:43.704366 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:40:56 crc kubenswrapper[5012]: I0219 06:40:56.703533 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:40:57 crc kubenswrapper[5012]: I0219 06:40:57.622611 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"00e1065f77f9e7e865aaa5f4d131bac2bd7836a8c4264f89c263a864c8ce750f"} Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.413350 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tf4n4"] Feb 19 06:41:22 crc kubenswrapper[5012]: E0219 06:41:22.414438 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d3e688-640e-4cd4-9729-8405195032a3" containerName="registry-server" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.414456 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d3e688-640e-4cd4-9729-8405195032a3" containerName="registry-server" Feb 19 06:41:22 crc kubenswrapper[5012]: E0219 06:41:22.414489 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d3e688-640e-4cd4-9729-8405195032a3" containerName="extract-content" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.414497 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d3e688-640e-4cd4-9729-8405195032a3" containerName="extract-content" Feb 19 06:41:22 crc kubenswrapper[5012]: E0219 06:41:22.414535 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d3e688-640e-4cd4-9729-8405195032a3" containerName="extract-utilities" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.414543 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d3e688-640e-4cd4-9729-8405195032a3" containerName="extract-utilities" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.414794 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d3e688-640e-4cd4-9729-8405195032a3" containerName="registry-server" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.416545 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.454685 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tf4n4"] Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.499196 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-catalog-content\") pod \"redhat-marketplace-tf4n4\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.499584 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kld54\" (UniqueName: \"kubernetes.io/projected/6841b536-8c98-4aad-9989-b588d892ff31-kube-api-access-kld54\") pod \"redhat-marketplace-tf4n4\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.499639 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-utilities\") pod \"redhat-marketplace-tf4n4\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.602291 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kld54\" (UniqueName: \"kubernetes.io/projected/6841b536-8c98-4aad-9989-b588d892ff31-kube-api-access-kld54\") pod \"redhat-marketplace-tf4n4\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.602407 5012 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-utilities\") pod \"redhat-marketplace-tf4n4\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.602499 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-catalog-content\") pod \"redhat-marketplace-tf4n4\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.603086 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-utilities\") pod \"redhat-marketplace-tf4n4\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.603086 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-catalog-content\") pod \"redhat-marketplace-tf4n4\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.628745 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kld54\" (UniqueName: \"kubernetes.io/projected/6841b536-8c98-4aad-9989-b588d892ff31-kube-api-access-kld54\") pod \"redhat-marketplace-tf4n4\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:22 crc kubenswrapper[5012]: I0219 06:41:22.746182 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:23 crc kubenswrapper[5012]: I0219 06:41:23.230714 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tf4n4"] Feb 19 06:41:23 crc kubenswrapper[5012]: I0219 06:41:23.909363 5012 generic.go:334] "Generic (PLEG): container finished" podID="6841b536-8c98-4aad-9989-b588d892ff31" containerID="d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166" exitCode=0 Feb 19 06:41:23 crc kubenswrapper[5012]: I0219 06:41:23.909451 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tf4n4" event={"ID":"6841b536-8c98-4aad-9989-b588d892ff31","Type":"ContainerDied","Data":"d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166"} Feb 19 06:41:23 crc kubenswrapper[5012]: I0219 06:41:23.909632 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tf4n4" event={"ID":"6841b536-8c98-4aad-9989-b588d892ff31","Type":"ContainerStarted","Data":"942c30e93ef58c6bb6ca3087e3a4abe40e69b68204d55971a374b9fa9499bfc6"} Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.616657 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-66cmz"] Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.621860 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.662136 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-66cmz"] Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.747650 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-utilities\") pod \"redhat-operators-66cmz\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.747732 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ghkp\" (UniqueName: \"kubernetes.io/projected/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-kube-api-access-2ghkp\") pod \"redhat-operators-66cmz\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.748239 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-catalog-content\") pod \"redhat-operators-66cmz\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.851357 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-catalog-content\") pod \"redhat-operators-66cmz\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.851603 5012 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-utilities\") pod \"redhat-operators-66cmz\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.851733 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ghkp\" (UniqueName: \"kubernetes.io/projected/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-kube-api-access-2ghkp\") pod \"redhat-operators-66cmz\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.852750 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-utilities\") pod \"redhat-operators-66cmz\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.853543 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-catalog-content\") pod \"redhat-operators-66cmz\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.879666 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ghkp\" (UniqueName: \"kubernetes.io/projected/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-kube-api-access-2ghkp\") pod \"redhat-operators-66cmz\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:24 crc kubenswrapper[5012]: I0219 06:41:24.972468 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:25 crc kubenswrapper[5012]: I0219 06:41:25.506879 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-66cmz"] Feb 19 06:41:25 crc kubenswrapper[5012]: I0219 06:41:25.931866 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tf4n4" event={"ID":"6841b536-8c98-4aad-9989-b588d892ff31","Type":"ContainerStarted","Data":"baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438"} Feb 19 06:41:25 crc kubenswrapper[5012]: I0219 06:41:25.935557 5012 generic.go:334] "Generic (PLEG): container finished" podID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerID="a87d0a7fdd603f20783ea6ff56dc5dc304c40610e8dbadc8d1e3e50edb7634ba" exitCode=0 Feb 19 06:41:25 crc kubenswrapper[5012]: I0219 06:41:25.935607 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66cmz" event={"ID":"61ce3f5e-028c-4b7f-a71e-a92e3d856c23","Type":"ContainerDied","Data":"a87d0a7fdd603f20783ea6ff56dc5dc304c40610e8dbadc8d1e3e50edb7634ba"} Feb 19 06:41:25 crc kubenswrapper[5012]: I0219 06:41:25.935663 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66cmz" event={"ID":"61ce3f5e-028c-4b7f-a71e-a92e3d856c23","Type":"ContainerStarted","Data":"2624cc6ec470f8dc84d3c7b03def5d04d47434c9b3d526915845fbd6837bcee1"} Feb 19 06:41:26 crc kubenswrapper[5012]: I0219 06:41:26.945627 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66cmz" event={"ID":"61ce3f5e-028c-4b7f-a71e-a92e3d856c23","Type":"ContainerStarted","Data":"c02346b5da5965008cd2f931a88dfe00233b3fb1cb1efe9113a690b73d595fc4"} Feb 19 06:41:26 crc kubenswrapper[5012]: I0219 06:41:26.948099 5012 generic.go:334] "Generic (PLEG): container finished" podID="6841b536-8c98-4aad-9989-b588d892ff31" 
containerID="baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438" exitCode=0 Feb 19 06:41:26 crc kubenswrapper[5012]: I0219 06:41:26.948133 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tf4n4" event={"ID":"6841b536-8c98-4aad-9989-b588d892ff31","Type":"ContainerDied","Data":"baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438"} Feb 19 06:41:28 crc kubenswrapper[5012]: I0219 06:41:28.965904 5012 generic.go:334] "Generic (PLEG): container finished" podID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerID="c02346b5da5965008cd2f931a88dfe00233b3fb1cb1efe9113a690b73d595fc4" exitCode=0 Feb 19 06:41:28 crc kubenswrapper[5012]: I0219 06:41:28.966022 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66cmz" event={"ID":"61ce3f5e-028c-4b7f-a71e-a92e3d856c23","Type":"ContainerDied","Data":"c02346b5da5965008cd2f931a88dfe00233b3fb1cb1efe9113a690b73d595fc4"} Feb 19 06:41:28 crc kubenswrapper[5012]: I0219 06:41:28.970791 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tf4n4" event={"ID":"6841b536-8c98-4aad-9989-b588d892ff31","Type":"ContainerStarted","Data":"61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148"} Feb 19 06:41:29 crc kubenswrapper[5012]: I0219 06:41:29.010038 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tf4n4" podStartSLOduration=2.993090319 podStartE2EDuration="7.010022682s" podCreationTimestamp="2026-02-19 06:41:22 +0000 UTC" firstStartedPulling="2026-02-19 06:41:23.911888223 +0000 UTC m=+4579.945210832" lastFinishedPulling="2026-02-19 06:41:27.928820626 +0000 UTC m=+4583.962143195" observedRunningTime="2026-02-19 06:41:29.004882709 +0000 UTC m=+4585.038205278" watchObservedRunningTime="2026-02-19 06:41:29.010022682 +0000 UTC m=+4585.043345251" Feb 19 06:41:31 crc kubenswrapper[5012]: I0219 
06:41:31.004020 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66cmz" event={"ID":"61ce3f5e-028c-4b7f-a71e-a92e3d856c23","Type":"ContainerStarted","Data":"24edfd5d55e482195adaebf9b5b91b0fd7806997553ad21de551686b0c104daa"} Feb 19 06:41:31 crc kubenswrapper[5012]: I0219 06:41:31.041070 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-66cmz" podStartSLOduration=3.470736904 podStartE2EDuration="7.041040448s" podCreationTimestamp="2026-02-19 06:41:24 +0000 UTC" firstStartedPulling="2026-02-19 06:41:25.93791197 +0000 UTC m=+4581.971234539" lastFinishedPulling="2026-02-19 06:41:29.508215504 +0000 UTC m=+4585.541538083" observedRunningTime="2026-02-19 06:41:31.031494229 +0000 UTC m=+4587.064816808" watchObservedRunningTime="2026-02-19 06:41:31.041040448 +0000 UTC m=+4587.074363057" Feb 19 06:41:32 crc kubenswrapper[5012]: I0219 06:41:32.747408 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:32 crc kubenswrapper[5012]: I0219 06:41:32.747944 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:32 crc kubenswrapper[5012]: I0219 06:41:32.816406 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:33 crc kubenswrapper[5012]: I0219 06:41:33.121694 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:34 crc kubenswrapper[5012]: I0219 06:41:34.794666 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tf4n4"] Feb 19 06:41:34 crc kubenswrapper[5012]: I0219 06:41:34.973835 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:34 crc kubenswrapper[5012]: I0219 06:41:34.974118 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.053982 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tf4n4" podUID="6841b536-8c98-4aad-9989-b588d892ff31" containerName="registry-server" containerID="cri-o://61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148" gracePeriod=2 Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.611028 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.671840 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kld54\" (UniqueName: \"kubernetes.io/projected/6841b536-8c98-4aad-9989-b588d892ff31-kube-api-access-kld54\") pod \"6841b536-8c98-4aad-9989-b588d892ff31\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.671993 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-utilities\") pod \"6841b536-8c98-4aad-9989-b588d892ff31\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.672094 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-catalog-content\") pod \"6841b536-8c98-4aad-9989-b588d892ff31\" (UID: \"6841b536-8c98-4aad-9989-b588d892ff31\") " Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.673435 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-utilities" (OuterVolumeSpecName: "utilities") pod "6841b536-8c98-4aad-9989-b588d892ff31" (UID: "6841b536-8c98-4aad-9989-b588d892ff31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.682062 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6841b536-8c98-4aad-9989-b588d892ff31-kube-api-access-kld54" (OuterVolumeSpecName: "kube-api-access-kld54") pod "6841b536-8c98-4aad-9989-b588d892ff31" (UID: "6841b536-8c98-4aad-9989-b588d892ff31"). InnerVolumeSpecName "kube-api-access-kld54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.722186 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6841b536-8c98-4aad-9989-b588d892ff31" (UID: "6841b536-8c98-4aad-9989-b588d892ff31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.775746 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kld54\" (UniqueName: \"kubernetes.io/projected/6841b536-8c98-4aad-9989-b588d892ff31-kube-api-access-kld54\") on node \"crc\" DevicePath \"\"" Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.775804 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:41:35 crc kubenswrapper[5012]: I0219 06:41:35.775824 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6841b536-8c98-4aad-9989-b588d892ff31-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.028891 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-66cmz" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerName="registry-server" probeResult="failure" output=< Feb 19 06:41:36 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 06:41:36 crc kubenswrapper[5012]: > Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.066443 5012 generic.go:334] "Generic (PLEG): container finished" podID="6841b536-8c98-4aad-9989-b588d892ff31" containerID="61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148" exitCode=0 Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.066486 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tf4n4" event={"ID":"6841b536-8c98-4aad-9989-b588d892ff31","Type":"ContainerDied","Data":"61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148"} Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.066514 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-tf4n4" event={"ID":"6841b536-8c98-4aad-9989-b588d892ff31","Type":"ContainerDied","Data":"942c30e93ef58c6bb6ca3087e3a4abe40e69b68204d55971a374b9fa9499bfc6"} Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.066532 5012 scope.go:117] "RemoveContainer" containerID="61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.067022 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tf4n4" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.090608 5012 scope.go:117] "RemoveContainer" containerID="baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.109336 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tf4n4"] Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.123088 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tf4n4"] Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.146091 5012 scope.go:117] "RemoveContainer" containerID="d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.173913 5012 scope.go:117] "RemoveContainer" containerID="61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148" Feb 19 06:41:36 crc kubenswrapper[5012]: E0219 06:41:36.174498 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148\": container with ID starting with 61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148 not found: ID does not exist" containerID="61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.174548 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148"} err="failed to get container status \"61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148\": rpc error: code = NotFound desc = could not find container \"61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148\": container with ID starting with 61e0ad9d821c00b47e2942bfeba986d48ecd2623a343c22d6c5a33aa672d3148 not found: ID does not exist" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.174575 5012 scope.go:117] "RemoveContainer" containerID="baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438" Feb 19 06:41:36 crc kubenswrapper[5012]: E0219 06:41:36.175022 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438\": container with ID starting with baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438 not found: ID does not exist" containerID="baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.175058 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438"} err="failed to get container status \"baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438\": rpc error: code = NotFound desc = could not find container \"baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438\": container with ID starting with baff5d8c6f695a23c043a049390ef365e55a4cd90ca7e22ff79cdeb3fed2a438 not found: ID does not exist" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.175082 5012 scope.go:117] "RemoveContainer" containerID="d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166" Feb 19 06:41:36 crc kubenswrapper[5012]: E0219 
06:41:36.175373 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166\": container with ID starting with d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166 not found: ID does not exist" containerID="d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.175397 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166"} err="failed to get container status \"d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166\": rpc error: code = NotFound desc = could not find container \"d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166\": container with ID starting with d5ecdc7ff415eb3ad472e9f14983b0b4cf3d21e1bb9e5234ee0e47445a18f166 not found: ID does not exist" Feb 19 06:41:36 crc kubenswrapper[5012]: I0219 06:41:36.723770 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6841b536-8c98-4aad-9989-b588d892ff31" path="/var/lib/kubelet/pods/6841b536-8c98-4aad-9989-b588d892ff31/volumes" Feb 19 06:41:43 crc kubenswrapper[5012]: I0219 06:41:43.726693 5012 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="1fd0c672-e258-4feb-8bbd-26135f92f7fb" containerName="galera" probeResult="failure" output="command timed out" Feb 19 06:41:43 crc kubenswrapper[5012]: I0219 06:41:43.726873 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="1fd0c672-e258-4feb-8bbd-26135f92f7fb" containerName="galera" probeResult="failure" output="command timed out" Feb 19 06:41:45 crc kubenswrapper[5012]: I0219 06:41:45.076288 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-66cmz" Feb 
19 06:41:45 crc kubenswrapper[5012]: I0219 06:41:45.151154 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:45 crc kubenswrapper[5012]: I0219 06:41:45.366106 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-66cmz"] Feb 19 06:41:46 crc kubenswrapper[5012]: I0219 06:41:46.199117 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-66cmz" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerName="registry-server" containerID="cri-o://24edfd5d55e482195adaebf9b5b91b0fd7806997553ad21de551686b0c104daa" gracePeriod=2 Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.210951 5012 generic.go:334] "Generic (PLEG): container finished" podID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerID="24edfd5d55e482195adaebf9b5b91b0fd7806997553ad21de551686b0c104daa" exitCode=0 Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.211039 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66cmz" event={"ID":"61ce3f5e-028c-4b7f-a71e-a92e3d856c23","Type":"ContainerDied","Data":"24edfd5d55e482195adaebf9b5b91b0fd7806997553ad21de551686b0c104daa"} Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.712801 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.815336 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-utilities\") pod \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.815404 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ghkp\" (UniqueName: \"kubernetes.io/projected/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-kube-api-access-2ghkp\") pod \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.815460 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-catalog-content\") pod \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\" (UID: \"61ce3f5e-028c-4b7f-a71e-a92e3d856c23\") " Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.818003 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-utilities" (OuterVolumeSpecName: "utilities") pod "61ce3f5e-028c-4b7f-a71e-a92e3d856c23" (UID: "61ce3f5e-028c-4b7f-a71e-a92e3d856c23"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.827831 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-kube-api-access-2ghkp" (OuterVolumeSpecName: "kube-api-access-2ghkp") pod "61ce3f5e-028c-4b7f-a71e-a92e3d856c23" (UID: "61ce3f5e-028c-4b7f-a71e-a92e3d856c23"). InnerVolumeSpecName "kube-api-access-2ghkp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.918472 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.918530 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ghkp\" (UniqueName: \"kubernetes.io/projected/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-kube-api-access-2ghkp\") on node \"crc\" DevicePath \"\"" Feb 19 06:41:47 crc kubenswrapper[5012]: I0219 06:41:47.936167 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61ce3f5e-028c-4b7f-a71e-a92e3d856c23" (UID: "61ce3f5e-028c-4b7f-a71e-a92e3d856c23"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:41:48 crc kubenswrapper[5012]: I0219 06:41:48.020707 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61ce3f5e-028c-4b7f-a71e-a92e3d856c23-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:41:48 crc kubenswrapper[5012]: I0219 06:41:48.227455 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66cmz" event={"ID":"61ce3f5e-028c-4b7f-a71e-a92e3d856c23","Type":"ContainerDied","Data":"2624cc6ec470f8dc84d3c7b03def5d04d47434c9b3d526915845fbd6837bcee1"} Feb 19 06:41:48 crc kubenswrapper[5012]: I0219 06:41:48.227535 5012 scope.go:117] "RemoveContainer" containerID="24edfd5d55e482195adaebf9b5b91b0fd7806997553ad21de551686b0c104daa" Feb 19 06:41:48 crc kubenswrapper[5012]: I0219 06:41:48.227574 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66cmz" Feb 19 06:41:48 crc kubenswrapper[5012]: I0219 06:41:48.271057 5012 scope.go:117] "RemoveContainer" containerID="c02346b5da5965008cd2f931a88dfe00233b3fb1cb1efe9113a690b73d595fc4" Feb 19 06:41:48 crc kubenswrapper[5012]: I0219 06:41:48.281712 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-66cmz"] Feb 19 06:41:48 crc kubenswrapper[5012]: I0219 06:41:48.291873 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-66cmz"] Feb 19 06:41:48 crc kubenswrapper[5012]: I0219 06:41:48.310000 5012 scope.go:117] "RemoveContainer" containerID="a87d0a7fdd603f20783ea6ff56dc5dc304c40610e8dbadc8d1e3e50edb7634ba" Feb 19 06:41:48 crc kubenswrapper[5012]: I0219 06:41:48.727840 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" path="/var/lib/kubelet/pods/61ce3f5e-028c-4b7f-a71e-a92e3d856c23/volumes" Feb 19 06:43:14 crc kubenswrapper[5012]: I0219 06:43:14.431249 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:43:14 crc kubenswrapper[5012]: I0219 06:43:14.431924 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:43:44 crc kubenswrapper[5012]: I0219 06:43:44.431431 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:43:44 crc kubenswrapper[5012]: I0219 06:43:44.432783 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:44:14 crc kubenswrapper[5012]: I0219 06:44:14.431169 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:44:14 crc kubenswrapper[5012]: I0219 06:44:14.431932 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:44:14 crc kubenswrapper[5012]: I0219 06:44:14.432005 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:44:14 crc kubenswrapper[5012]: I0219 06:44:14.433134 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00e1065f77f9e7e865aaa5f4d131bac2bd7836a8c4264f89c263a864c8ce750f"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:44:14 crc kubenswrapper[5012]: I0219 06:44:14.433287 5012 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://00e1065f77f9e7e865aaa5f4d131bac2bd7836a8c4264f89c263a864c8ce750f" gracePeriod=600 Feb 19 06:44:14 crc kubenswrapper[5012]: I0219 06:44:14.899885 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="00e1065f77f9e7e865aaa5f4d131bac2bd7836a8c4264f89c263a864c8ce750f" exitCode=0 Feb 19 06:44:14 crc kubenswrapper[5012]: I0219 06:44:14.899984 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"00e1065f77f9e7e865aaa5f4d131bac2bd7836a8c4264f89c263a864c8ce750f"} Feb 19 06:44:14 crc kubenswrapper[5012]: I0219 06:44:14.900427 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3"} Feb 19 06:44:14 crc kubenswrapper[5012]: I0219 06:44:14.900469 5012 scope.go:117] "RemoveContainer" containerID="2cde42ad7625ac0c8d29252795b50e87ace740bb5321ae6f99cac64886f075c0" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.173569 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx"] Feb 19 06:45:00 crc kubenswrapper[5012]: E0219 06:45:00.175260 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6841b536-8c98-4aad-9989-b588d892ff31" containerName="extract-utilities" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.175294 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6841b536-8c98-4aad-9989-b588d892ff31" 
containerName="extract-utilities" Feb 19 06:45:00 crc kubenswrapper[5012]: E0219 06:45:00.175369 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerName="extract-utilities" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.175386 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerName="extract-utilities" Feb 19 06:45:00 crc kubenswrapper[5012]: E0219 06:45:00.175429 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6841b536-8c98-4aad-9989-b588d892ff31" containerName="registry-server" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.175448 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6841b536-8c98-4aad-9989-b588d892ff31" containerName="registry-server" Feb 19 06:45:00 crc kubenswrapper[5012]: E0219 06:45:00.175473 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6841b536-8c98-4aad-9989-b588d892ff31" containerName="extract-content" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.175488 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6841b536-8c98-4aad-9989-b588d892ff31" containerName="extract-content" Feb 19 06:45:00 crc kubenswrapper[5012]: E0219 06:45:00.175526 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerName="registry-server" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.175542 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerName="registry-server" Feb 19 06:45:00 crc kubenswrapper[5012]: E0219 06:45:00.175600 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerName="extract-content" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.175618 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" 
containerName="extract-content" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.176062 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="61ce3f5e-028c-4b7f-a71e-a92e3d856c23" containerName="registry-server" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.176141 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6841b536-8c98-4aad-9989-b588d892ff31" containerName="registry-server" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.177646 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.180903 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.182362 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.187809 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx"] Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.327939 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82vp2\" (UniqueName: \"kubernetes.io/projected/a162758d-5bc7-4bb8-949c-e32d2f33a380-kube-api-access-82vp2\") pod \"collect-profiles-29524725-rr4lx\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.328071 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a162758d-5bc7-4bb8-949c-e32d2f33a380-secret-volume\") pod 
\"collect-profiles-29524725-rr4lx\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.328117 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a162758d-5bc7-4bb8-949c-e32d2f33a380-config-volume\") pod \"collect-profiles-29524725-rr4lx\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.430234 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82vp2\" (UniqueName: \"kubernetes.io/projected/a162758d-5bc7-4bb8-949c-e32d2f33a380-kube-api-access-82vp2\") pod \"collect-profiles-29524725-rr4lx\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.430414 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a162758d-5bc7-4bb8-949c-e32d2f33a380-secret-volume\") pod \"collect-profiles-29524725-rr4lx\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.430485 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a162758d-5bc7-4bb8-949c-e32d2f33a380-config-volume\") pod \"collect-profiles-29524725-rr4lx\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.431745 5012 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a162758d-5bc7-4bb8-949c-e32d2f33a380-config-volume\") pod \"collect-profiles-29524725-rr4lx\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.440013 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a162758d-5bc7-4bb8-949c-e32d2f33a380-secret-volume\") pod \"collect-profiles-29524725-rr4lx\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.457212 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82vp2\" (UniqueName: \"kubernetes.io/projected/a162758d-5bc7-4bb8-949c-e32d2f33a380-kube-api-access-82vp2\") pod \"collect-profiles-29524725-rr4lx\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.509453 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:00 crc kubenswrapper[5012]: I0219 06:45:00.826545 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx"] Feb 19 06:45:01 crc kubenswrapper[5012]: I0219 06:45:01.593490 5012 generic.go:334] "Generic (PLEG): container finished" podID="a162758d-5bc7-4bb8-949c-e32d2f33a380" containerID="23b98103b785157c94165d83573686c65c77cd8c456afe6a275faa1a2a6f6d07" exitCode=0 Feb 19 06:45:01 crc kubenswrapper[5012]: I0219 06:45:01.593705 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" event={"ID":"a162758d-5bc7-4bb8-949c-e32d2f33a380","Type":"ContainerDied","Data":"23b98103b785157c94165d83573686c65c77cd8c456afe6a275faa1a2a6f6d07"} Feb 19 06:45:01 crc kubenswrapper[5012]: I0219 06:45:01.594031 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" event={"ID":"a162758d-5bc7-4bb8-949c-e32d2f33a380","Type":"ContainerStarted","Data":"6f1bbee1a753485477679410e5d6880b9435760f5bdd367bcbd68fdfbc9f18e8"} Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.008061 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.190559 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a162758d-5bc7-4bb8-949c-e32d2f33a380-secret-volume\") pod \"a162758d-5bc7-4bb8-949c-e32d2f33a380\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.190813 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a162758d-5bc7-4bb8-949c-e32d2f33a380-config-volume\") pod \"a162758d-5bc7-4bb8-949c-e32d2f33a380\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.190851 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82vp2\" (UniqueName: \"kubernetes.io/projected/a162758d-5bc7-4bb8-949c-e32d2f33a380-kube-api-access-82vp2\") pod \"a162758d-5bc7-4bb8-949c-e32d2f33a380\" (UID: \"a162758d-5bc7-4bb8-949c-e32d2f33a380\") " Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.192176 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a162758d-5bc7-4bb8-949c-e32d2f33a380-config-volume" (OuterVolumeSpecName: "config-volume") pod "a162758d-5bc7-4bb8-949c-e32d2f33a380" (UID: "a162758d-5bc7-4bb8-949c-e32d2f33a380"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.197197 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a162758d-5bc7-4bb8-949c-e32d2f33a380-kube-api-access-82vp2" (OuterVolumeSpecName: "kube-api-access-82vp2") pod "a162758d-5bc7-4bb8-949c-e32d2f33a380" (UID: "a162758d-5bc7-4bb8-949c-e32d2f33a380"). 
InnerVolumeSpecName "kube-api-access-82vp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.197430 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a162758d-5bc7-4bb8-949c-e32d2f33a380-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a162758d-5bc7-4bb8-949c-e32d2f33a380" (UID: "a162758d-5bc7-4bb8-949c-e32d2f33a380"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.293373 5012 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a162758d-5bc7-4bb8-949c-e32d2f33a380-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.293411 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82vp2\" (UniqueName: \"kubernetes.io/projected/a162758d-5bc7-4bb8-949c-e32d2f33a380-kube-api-access-82vp2\") on node \"crc\" DevicePath \"\"" Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.293425 5012 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a162758d-5bc7-4bb8-949c-e32d2f33a380-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.622974 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" event={"ID":"a162758d-5bc7-4bb8-949c-e32d2f33a380","Type":"ContainerDied","Data":"6f1bbee1a753485477679410e5d6880b9435760f5bdd367bcbd68fdfbc9f18e8"} Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.623031 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f1bbee1a753485477679410e5d6880b9435760f5bdd367bcbd68fdfbc9f18e8" Feb 19 06:45:03 crc kubenswrapper[5012]: I0219 06:45:03.623062 5012 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524725-rr4lx" Feb 19 06:45:04 crc kubenswrapper[5012]: I0219 06:45:04.100016 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8"] Feb 19 06:45:04 crc kubenswrapper[5012]: I0219 06:45:04.111213 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524680-7g7p8"] Feb 19 06:45:04 crc kubenswrapper[5012]: I0219 06:45:04.718819 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e" path="/var/lib/kubelet/pods/f5bdc022-3d70-4d4d-8f03-f2cf8b295a7e/volumes" Feb 19 06:45:39 crc kubenswrapper[5012]: I0219 06:45:39.424444 5012 scope.go:117] "RemoveContainer" containerID="aab8c26b7c272ad359e2397dc4c5c133e04f23474846a5322b643a2a4fdad8bd" Feb 19 06:46:09 crc kubenswrapper[5012]: E0219 06:46:09.062025 5012 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.110:53294->38.102.83.110:36123: write tcp 38.102.83.110:53294->38.102.83.110:36123: write: broken pipe Feb 19 06:46:14 crc kubenswrapper[5012]: I0219 06:46:14.431030 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:46:14 crc kubenswrapper[5012]: I0219 06:46:14.431671 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:46:44 crc kubenswrapper[5012]: I0219 
06:46:44.430415 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:46:44 crc kubenswrapper[5012]: I0219 06:46:44.430993 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:47:14 crc kubenswrapper[5012]: I0219 06:47:14.430849 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:47:14 crc kubenswrapper[5012]: I0219 06:47:14.431329 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:47:14 crc kubenswrapper[5012]: I0219 06:47:14.431376 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 06:47:14 crc kubenswrapper[5012]: I0219 06:47:14.432112 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 06:47:14 crc kubenswrapper[5012]: I0219 06:47:14.432161 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" gracePeriod=600 Feb 19 06:47:14 crc kubenswrapper[5012]: E0219 06:47:14.565987 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:47:15 crc kubenswrapper[5012]: I0219 06:47:15.179579 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" exitCode=0 Feb 19 06:47:15 crc kubenswrapper[5012]: I0219 06:47:15.179842 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3"} Feb 19 06:47:15 crc kubenswrapper[5012]: I0219 06:47:15.179877 5012 scope.go:117] "RemoveContainer" containerID="00e1065f77f9e7e865aaa5f4d131bac2bd7836a8c4264f89c263a864c8ce750f" Feb 19 06:47:15 crc kubenswrapper[5012]: I0219 06:47:15.180890 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:47:15 crc kubenswrapper[5012]: E0219 06:47:15.181435 5012 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:47:28 crc kubenswrapper[5012]: I0219 06:47:28.703272 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:47:28 crc kubenswrapper[5012]: E0219 06:47:28.704279 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:47:40 crc kubenswrapper[5012]: I0219 06:47:40.702789 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:47:40 crc kubenswrapper[5012]: E0219 06:47:40.703420 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:47:51 crc kubenswrapper[5012]: I0219 06:47:51.703753 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:47:51 crc kubenswrapper[5012]: E0219 
06:47:51.704731 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:48:02 crc kubenswrapper[5012]: I0219 06:48:02.704506 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:48:02 crc kubenswrapper[5012]: E0219 06:48:02.707171 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:48:17 crc kubenswrapper[5012]: I0219 06:48:17.704498 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:48:17 crc kubenswrapper[5012]: E0219 06:48:17.706021 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:48:28 crc kubenswrapper[5012]: I0219 06:48:28.703362 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:48:28 crc 
kubenswrapper[5012]: E0219 06:48:28.704407 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:48:41 crc kubenswrapper[5012]: I0219 06:48:41.702687 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:48:41 crc kubenswrapper[5012]: E0219 06:48:41.703585 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:48:55 crc kubenswrapper[5012]: I0219 06:48:55.703537 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:48:55 crc kubenswrapper[5012]: E0219 06:48:55.704808 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:49:08 crc kubenswrapper[5012]: I0219 06:49:08.703769 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 
19 06:49:08 crc kubenswrapper[5012]: E0219 06:49:08.704688 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:49:22 crc kubenswrapper[5012]: I0219 06:49:22.703681 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:49:22 crc kubenswrapper[5012]: E0219 06:49:22.704802 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:49:37 crc kubenswrapper[5012]: I0219 06:49:37.704230 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:49:37 crc kubenswrapper[5012]: E0219 06:49:37.706376 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:49:52 crc kubenswrapper[5012]: I0219 06:49:52.703174 5012 scope.go:117] "RemoveContainer" 
containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:49:52 crc kubenswrapper[5012]: E0219 06:49:52.706622 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:50:05 crc kubenswrapper[5012]: I0219 06:50:05.703056 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:50:05 crc kubenswrapper[5012]: E0219 06:50:05.704639 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.168880 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7tqft"] Feb 19 06:50:15 crc kubenswrapper[5012]: E0219 06:50:15.170105 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a162758d-5bc7-4bb8-949c-e32d2f33a380" containerName="collect-profiles" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.170128 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="a162758d-5bc7-4bb8-949c-e32d2f33a380" containerName="collect-profiles" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.170581 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="a162758d-5bc7-4bb8-949c-e32d2f33a380" 
containerName="collect-profiles" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.172982 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.184937 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7tqft"] Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.235506 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnlw6\" (UniqueName: \"kubernetes.io/projected/91142b0c-3f47-468d-b976-121a1a8afb9a-kube-api-access-lnlw6\") pod \"community-operators-7tqft\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.235663 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-catalog-content\") pod \"community-operators-7tqft\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.235729 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-utilities\") pod \"community-operators-7tqft\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.337441 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnlw6\" (UniqueName: \"kubernetes.io/projected/91142b0c-3f47-468d-b976-121a1a8afb9a-kube-api-access-lnlw6\") pod \"community-operators-7tqft\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") 
" pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.337547 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-catalog-content\") pod \"community-operators-7tqft\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.337593 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-utilities\") pod \"community-operators-7tqft\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.338129 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-utilities\") pod \"community-operators-7tqft\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.338282 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-catalog-content\") pod \"community-operators-7tqft\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.357912 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnlw6\" (UniqueName: \"kubernetes.io/projected/91142b0c-3f47-468d-b976-121a1a8afb9a-kube-api-access-lnlw6\") pod \"community-operators-7tqft\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " 
pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:15 crc kubenswrapper[5012]: I0219 06:50:15.525480 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:16 crc kubenswrapper[5012]: I0219 06:50:16.064024 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7tqft"] Feb 19 06:50:16 crc kubenswrapper[5012]: I0219 06:50:16.556468 5012 generic.go:334] "Generic (PLEG): container finished" podID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerID="580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90" exitCode=0 Feb 19 06:50:16 crc kubenswrapper[5012]: I0219 06:50:16.556587 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tqft" event={"ID":"91142b0c-3f47-468d-b976-121a1a8afb9a","Type":"ContainerDied","Data":"580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90"} Feb 19 06:50:16 crc kubenswrapper[5012]: I0219 06:50:16.557293 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tqft" event={"ID":"91142b0c-3f47-468d-b976-121a1a8afb9a","Type":"ContainerStarted","Data":"1ae9311f07ab91b8cc6e0a3d9e7f87d400993f9f8f9110238874445a3119b52d"} Feb 19 06:50:16 crc kubenswrapper[5012]: I0219 06:50:16.559557 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 06:50:17 crc kubenswrapper[5012]: I0219 06:50:17.572760 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tqft" event={"ID":"91142b0c-3f47-468d-b976-121a1a8afb9a","Type":"ContainerStarted","Data":"c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e"} Feb 19 06:50:18 crc kubenswrapper[5012]: I0219 06:50:18.593665 5012 generic.go:334] "Generic (PLEG): container finished" podID="91142b0c-3f47-468d-b976-121a1a8afb9a" 
containerID="c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e" exitCode=0 Feb 19 06:50:18 crc kubenswrapper[5012]: I0219 06:50:18.593750 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tqft" event={"ID":"91142b0c-3f47-468d-b976-121a1a8afb9a","Type":"ContainerDied","Data":"c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e"} Feb 19 06:50:20 crc kubenswrapper[5012]: I0219 06:50:20.623316 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tqft" event={"ID":"91142b0c-3f47-468d-b976-121a1a8afb9a","Type":"ContainerStarted","Data":"93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7"} Feb 19 06:50:20 crc kubenswrapper[5012]: I0219 06:50:20.664297 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7tqft" podStartSLOduration=3.242569656 podStartE2EDuration="5.664270749s" podCreationTimestamp="2026-02-19 06:50:15 +0000 UTC" firstStartedPulling="2026-02-19 06:50:16.559087692 +0000 UTC m=+5112.592410291" lastFinishedPulling="2026-02-19 06:50:18.980788785 +0000 UTC m=+5115.014111384" observedRunningTime="2026-02-19 06:50:20.645447155 +0000 UTC m=+5116.678769744" watchObservedRunningTime="2026-02-19 06:50:20.664270749 +0000 UTC m=+5116.697593358" Feb 19 06:50:20 crc kubenswrapper[5012]: I0219 06:50:20.703779 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:50:20 crc kubenswrapper[5012]: E0219 06:50:20.704084 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:50:25 crc kubenswrapper[5012]: I0219 06:50:25.525633 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:25 crc kubenswrapper[5012]: I0219 06:50:25.526126 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:25 crc kubenswrapper[5012]: I0219 06:50:25.591775 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:25 crc kubenswrapper[5012]: I0219 06:50:25.734044 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:25 crc kubenswrapper[5012]: I0219 06:50:25.834549 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7tqft"] Feb 19 06:50:27 crc kubenswrapper[5012]: I0219 06:50:27.697959 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7tqft" podUID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerName="registry-server" containerID="cri-o://93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7" gracePeriod=2 Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.271426 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.424773 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnlw6\" (UniqueName: \"kubernetes.io/projected/91142b0c-3f47-468d-b976-121a1a8afb9a-kube-api-access-lnlw6\") pod \"91142b0c-3f47-468d-b976-121a1a8afb9a\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.425004 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-catalog-content\") pod \"91142b0c-3f47-468d-b976-121a1a8afb9a\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.425049 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-utilities\") pod \"91142b0c-3f47-468d-b976-121a1a8afb9a\" (UID: \"91142b0c-3f47-468d-b976-121a1a8afb9a\") " Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.425889 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-utilities" (OuterVolumeSpecName: "utilities") pod "91142b0c-3f47-468d-b976-121a1a8afb9a" (UID: "91142b0c-3f47-468d-b976-121a1a8afb9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.434369 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91142b0c-3f47-468d-b976-121a1a8afb9a-kube-api-access-lnlw6" (OuterVolumeSpecName: "kube-api-access-lnlw6") pod "91142b0c-3f47-468d-b976-121a1a8afb9a" (UID: "91142b0c-3f47-468d-b976-121a1a8afb9a"). InnerVolumeSpecName "kube-api-access-lnlw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.472214 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91142b0c-3f47-468d-b976-121a1a8afb9a" (UID: "91142b0c-3f47-468d-b976-121a1a8afb9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.527148 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.527175 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91142b0c-3f47-468d-b976-121a1a8afb9a-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.527187 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnlw6\" (UniqueName: \"kubernetes.io/projected/91142b0c-3f47-468d-b976-121a1a8afb9a-kube-api-access-lnlw6\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.716615 5012 generic.go:334] "Generic (PLEG): container finished" podID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerID="93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7" exitCode=0 Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.716808 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7tqft" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.739151 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tqft" event={"ID":"91142b0c-3f47-468d-b976-121a1a8afb9a","Type":"ContainerDied","Data":"93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7"} Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.739195 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7tqft" event={"ID":"91142b0c-3f47-468d-b976-121a1a8afb9a","Type":"ContainerDied","Data":"1ae9311f07ab91b8cc6e0a3d9e7f87d400993f9f8f9110238874445a3119b52d"} Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.739211 5012 scope.go:117] "RemoveContainer" containerID="93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.785449 5012 scope.go:117] "RemoveContainer" containerID="c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.787009 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7tqft"] Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.799159 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7tqft"] Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.812067 5012 scope.go:117] "RemoveContainer" containerID="580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.852418 5012 scope.go:117] "RemoveContainer" containerID="93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7" Feb 19 06:50:28 crc kubenswrapper[5012]: E0219 06:50:28.852942 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7\": container with ID starting with 93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7 not found: ID does not exist" containerID="93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.853009 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7"} err="failed to get container status \"93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7\": rpc error: code = NotFound desc = could not find container \"93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7\": container with ID starting with 93dba8312e1729f5c1e74a1b362a13145b72b430c8dc1eb51f32c6c9ebfa69c7 not found: ID does not exist" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.853035 5012 scope.go:117] "RemoveContainer" containerID="c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e" Feb 19 06:50:28 crc kubenswrapper[5012]: E0219 06:50:28.853871 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e\": container with ID starting with c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e not found: ID does not exist" containerID="c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.853910 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e"} err="failed to get container status \"c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e\": rpc error: code = NotFound desc = could not find container \"c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e\": container with ID 
starting with c496bcf22369631907d41a1febd8b888764f7430be2d15f8d44991e78f142b5e not found: ID does not exist" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.853939 5012 scope.go:117] "RemoveContainer" containerID="580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90" Feb 19 06:50:28 crc kubenswrapper[5012]: E0219 06:50:28.854433 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90\": container with ID starting with 580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90 not found: ID does not exist" containerID="580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90" Feb 19 06:50:28 crc kubenswrapper[5012]: I0219 06:50:28.854735 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90"} err="failed to get container status \"580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90\": rpc error: code = NotFound desc = could not find container \"580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90\": container with ID starting with 580cdecfd551c5004fdf48b03a725c3301884e395537a8a76dbdfc2bc1a72b90 not found: ID does not exist" Feb 19 06:50:30 crc kubenswrapper[5012]: I0219 06:50:30.713806 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91142b0c-3f47-468d-b976-121a1a8afb9a" path="/var/lib/kubelet/pods/91142b0c-3f47-468d-b976-121a1a8afb9a/volumes" Feb 19 06:50:32 crc kubenswrapper[5012]: I0219 06:50:32.765749 5012 generic.go:334] "Generic (PLEG): container finished" podID="54eccb09-b3ec-45bc-8065-4c5eb9516257" containerID="45a71cb7a299afd86b43701046f8b7c089e907df4ed4d824464d2883ac4074ea" exitCode=0 Feb 19 06:50:32 crc kubenswrapper[5012]: I0219 06:50:32.765867 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/tempest-tests-tempest" event={"ID":"54eccb09-b3ec-45bc-8065-4c5eb9516257","Type":"ContainerDied","Data":"45a71cb7a299afd86b43701046f8b7c089e907df4ed4d824464d2883ac4074ea"} Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.231197 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268181 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-workdir\") pod \"54eccb09-b3ec-45bc-8065-4c5eb9516257\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268220 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-temporary\") pod \"54eccb09-b3ec-45bc-8065-4c5eb9516257\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268249 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-config-data\") pod \"54eccb09-b3ec-45bc-8065-4c5eb9516257\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268281 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8klk\" (UniqueName: \"kubernetes.io/projected/54eccb09-b3ec-45bc-8065-4c5eb9516257-kube-api-access-b8klk\") pod \"54eccb09-b3ec-45bc-8065-4c5eb9516257\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268426 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ca-certs\") pod \"54eccb09-b3ec-45bc-8065-4c5eb9516257\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268584 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ssh-key\") pod \"54eccb09-b3ec-45bc-8065-4c5eb9516257\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268648 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"54eccb09-b3ec-45bc-8065-4c5eb9516257\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268695 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config-secret\") pod \"54eccb09-b3ec-45bc-8065-4c5eb9516257\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268764 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config\") pod \"54eccb09-b3ec-45bc-8065-4c5eb9516257\" (UID: \"54eccb09-b3ec-45bc-8065-4c5eb9516257\") " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.268957 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "54eccb09-b3ec-45bc-8065-4c5eb9516257" (UID: "54eccb09-b3ec-45bc-8065-4c5eb9516257"). 
InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.269348 5012 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.269468 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-config-data" (OuterVolumeSpecName: "config-data") pod "54eccb09-b3ec-45bc-8065-4c5eb9516257" (UID: "54eccb09-b3ec-45bc-8065-4c5eb9516257"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.278579 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "54eccb09-b3ec-45bc-8065-4c5eb9516257" (UID: "54eccb09-b3ec-45bc-8065-4c5eb9516257"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.287496 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "54eccb09-b3ec-45bc-8065-4c5eb9516257" (UID: "54eccb09-b3ec-45bc-8065-4c5eb9516257"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.288359 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54eccb09-b3ec-45bc-8065-4c5eb9516257-kube-api-access-b8klk" (OuterVolumeSpecName: "kube-api-access-b8klk") pod "54eccb09-b3ec-45bc-8065-4c5eb9516257" (UID: "54eccb09-b3ec-45bc-8065-4c5eb9516257"). InnerVolumeSpecName "kube-api-access-b8klk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.314729 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "54eccb09-b3ec-45bc-8065-4c5eb9516257" (UID: "54eccb09-b3ec-45bc-8065-4c5eb9516257"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.322051 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "54eccb09-b3ec-45bc-8065-4c5eb9516257" (UID: "54eccb09-b3ec-45bc-8065-4c5eb9516257"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.334547 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "54eccb09-b3ec-45bc-8065-4c5eb9516257" (UID: "54eccb09-b3ec-45bc-8065-4c5eb9516257"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.360380 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "54eccb09-b3ec-45bc-8065-4c5eb9516257" (UID: "54eccb09-b3ec-45bc-8065-4c5eb9516257"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.371067 5012 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.371122 5012 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.371133 5012 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.371144 5012 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.371154 5012 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/54eccb09-b3ec-45bc-8065-4c5eb9516257-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.371163 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/54eccb09-b3ec-45bc-8065-4c5eb9516257-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.371173 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8klk\" (UniqueName: \"kubernetes.io/projected/54eccb09-b3ec-45bc-8065-4c5eb9516257-kube-api-access-b8klk\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.371183 5012 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/54eccb09-b3ec-45bc-8065-4c5eb9516257-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.395931 5012 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.472848 5012 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.709962 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:50:34 crc kubenswrapper[5012]: E0219 06:50:34.710659 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.786224 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"54eccb09-b3ec-45bc-8065-4c5eb9516257","Type":"ContainerDied","Data":"4d40402dd6566caf396779f17c2dbfad2df685b1f64caf3b6b294fc60c0aaaea"} Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.786281 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d40402dd6566caf396779f17c2dbfad2df685b1f64caf3b6b294fc60c0aaaea" Feb 19 06:50:34 crc kubenswrapper[5012]: I0219 06:50:34.786594 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.876078 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 06:50:42 crc kubenswrapper[5012]: E0219 06:50:42.877162 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerName="registry-server" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.877180 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerName="registry-server" Feb 19 06:50:42 crc kubenswrapper[5012]: E0219 06:50:42.877205 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerName="extract-content" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.877212 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerName="extract-content" Feb 19 06:50:42 crc kubenswrapper[5012]: E0219 06:50:42.877239 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerName="extract-utilities" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.877248 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerName="extract-utilities" Feb 19 06:50:42 crc kubenswrapper[5012]: E0219 06:50:42.877272 5012 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54eccb09-b3ec-45bc-8065-4c5eb9516257" containerName="tempest-tests-tempest-tests-runner" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.877280 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="54eccb09-b3ec-45bc-8065-4c5eb9516257" containerName="tempest-tests-tempest-tests-runner" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.877628 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="54eccb09-b3ec-45bc-8065-4c5eb9516257" containerName="tempest-tests-tempest-tests-runner" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.877646 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="91142b0c-3f47-468d-b976-121a1a8afb9a" containerName="registry-server" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.878492 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.882376 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-s2ths" Feb 19 06:50:42 crc kubenswrapper[5012]: I0219 06:50:42.888203 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.011382 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfw56\" (UniqueName: \"kubernetes.io/projected/78c125a8-bf69-4524-9b70-be9fe9f313e7-kube-api-access-rfw56\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78c125a8-bf69-4524-9b70-be9fe9f313e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.012021 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78c125a8-bf69-4524-9b70-be9fe9f313e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.114412 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78c125a8-bf69-4524-9b70-be9fe9f313e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.114591 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfw56\" (UniqueName: \"kubernetes.io/projected/78c125a8-bf69-4524-9b70-be9fe9f313e7-kube-api-access-rfw56\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78c125a8-bf69-4524-9b70-be9fe9f313e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.115016 5012 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78c125a8-bf69-4524-9b70-be9fe9f313e7\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.148471 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfw56\" (UniqueName: \"kubernetes.io/projected/78c125a8-bf69-4524-9b70-be9fe9f313e7-kube-api-access-rfw56\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78c125a8-bf69-4524-9b70-be9fe9f313e7\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.161962 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"78c125a8-bf69-4524-9b70-be9fe9f313e7\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.228128 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 06:50:43 crc kubenswrapper[5012]: W0219 06:50:43.740134 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78c125a8_bf69_4524_9b70_be9fe9f313e7.slice/crio-e8c180c988dc2a116669d5fc6c228b239b87284096937552d3fce9967c06c195 WatchSource:0}: Error finding container e8c180c988dc2a116669d5fc6c228b239b87284096937552d3fce9967c06c195: Status 404 returned error can't find the container with id e8c180c988dc2a116669d5fc6c228b239b87284096937552d3fce9967c06c195 Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.745191 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 06:50:43 crc kubenswrapper[5012]: I0219 06:50:43.944392 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"78c125a8-bf69-4524-9b70-be9fe9f313e7","Type":"ContainerStarted","Data":"e8c180c988dc2a116669d5fc6c228b239b87284096937552d3fce9967c06c195"} Feb 19 06:50:44 crc kubenswrapper[5012]: I0219 06:50:44.957981 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" 
event={"ID":"78c125a8-bf69-4524-9b70-be9fe9f313e7","Type":"ContainerStarted","Data":"6e96d1e4f9b7f9a554bfb367924f09fe9252e50f14f75ecf0e1e186fb76f5965"} Feb 19 06:50:44 crc kubenswrapper[5012]: I0219 06:50:44.977832 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.158438272 podStartE2EDuration="2.977800788s" podCreationTimestamp="2026-02-19 06:50:42 +0000 UTC" firstStartedPulling="2026-02-19 06:50:43.74325462 +0000 UTC m=+5139.776577199" lastFinishedPulling="2026-02-19 06:50:44.562617146 +0000 UTC m=+5140.595939715" observedRunningTime="2026-02-19 06:50:44.971728051 +0000 UTC m=+5141.005050700" watchObservedRunningTime="2026-02-19 06:50:44.977800788 +0000 UTC m=+5141.011123397" Feb 19 06:50:45 crc kubenswrapper[5012]: I0219 06:50:45.703503 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:50:45 crc kubenswrapper[5012]: E0219 06:50:45.704262 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.398481 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hz76m"] Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.402102 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.413456 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hz76m"] Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.488897 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-utilities\") pod \"certified-operators-hz76m\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.592003 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-catalog-content\") pod \"certified-operators-hz76m\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.592203 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-utilities\") pod \"certified-operators-hz76m\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.592411 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4xtk\" (UniqueName: \"kubernetes.io/projected/d5865e83-a688-4445-8b42-3ebaf9f9c74e-kube-api-access-h4xtk\") pod \"certified-operators-hz76m\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.592874 5012 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-utilities\") pod \"certified-operators-hz76m\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.693707 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4xtk\" (UniqueName: \"kubernetes.io/projected/d5865e83-a688-4445-8b42-3ebaf9f9c74e-kube-api-access-h4xtk\") pod \"certified-operators-hz76m\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.693862 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-catalog-content\") pod \"certified-operators-hz76m\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.694379 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-catalog-content\") pod \"certified-operators-hz76m\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.717396 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4xtk\" (UniqueName: \"kubernetes.io/projected/d5865e83-a688-4445-8b42-3ebaf9f9c74e-kube-api-access-h4xtk\") pod \"certified-operators-hz76m\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") " pod="openshift-marketplace/certified-operators-hz76m" Feb 19 06:50:49 crc kubenswrapper[5012]: I0219 06:50:49.723170 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hz76m"
Feb 19 06:50:50 crc kubenswrapper[5012]: I0219 06:50:50.239902 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hz76m"]
Feb 19 06:50:51 crc kubenswrapper[5012]: I0219 06:50:51.036809 5012 generic.go:334] "Generic (PLEG): container finished" podID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerID="6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec" exitCode=0
Feb 19 06:50:51 crc kubenswrapper[5012]: I0219 06:50:51.036866 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hz76m" event={"ID":"d5865e83-a688-4445-8b42-3ebaf9f9c74e","Type":"ContainerDied","Data":"6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec"}
Feb 19 06:50:51 crc kubenswrapper[5012]: I0219 06:50:51.037391 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hz76m" event={"ID":"d5865e83-a688-4445-8b42-3ebaf9f9c74e","Type":"ContainerStarted","Data":"18947d88a25745afff68df1e41694c114c48442134135268e3638f0b3c1c1e62"}
Feb 19 06:50:52 crc kubenswrapper[5012]: I0219 06:50:52.048738 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hz76m" event={"ID":"d5865e83-a688-4445-8b42-3ebaf9f9c74e","Type":"ContainerStarted","Data":"5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39"}
Feb 19 06:50:53 crc kubenswrapper[5012]: I0219 06:50:53.066736 5012 generic.go:334] "Generic (PLEG): container finished" podID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerID="5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39" exitCode=0
Feb 19 06:50:53 crc kubenswrapper[5012]: I0219 06:50:53.066810 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hz76m" event={"ID":"d5865e83-a688-4445-8b42-3ebaf9f9c74e","Type":"ContainerDied","Data":"5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39"}
Feb 19 06:50:54 crc kubenswrapper[5012]: I0219 06:50:54.079859 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hz76m" event={"ID":"d5865e83-a688-4445-8b42-3ebaf9f9c74e","Type":"ContainerStarted","Data":"8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83"}
Feb 19 06:50:54 crc kubenswrapper[5012]: I0219 06:50:54.108364 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hz76m" podStartSLOduration=2.635609689 podStartE2EDuration="5.108338033s" podCreationTimestamp="2026-02-19 06:50:49 +0000 UTC" firstStartedPulling="2026-02-19 06:50:51.039968432 +0000 UTC m=+5147.073291001" lastFinishedPulling="2026-02-19 06:50:53.512696736 +0000 UTC m=+5149.546019345" observedRunningTime="2026-02-19 06:50:54.099768816 +0000 UTC m=+5150.133091415" watchObservedRunningTime="2026-02-19 06:50:54.108338033 +0000 UTC m=+5150.141660642"
Feb 19 06:50:59 crc kubenswrapper[5012]: I0219 06:50:59.723781 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hz76m"
Feb 19 06:50:59 crc kubenswrapper[5012]: I0219 06:50:59.724272 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hz76m"
Feb 19 06:50:59 crc kubenswrapper[5012]: I0219 06:50:59.780475 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hz76m"
Feb 19 06:51:00 crc kubenswrapper[5012]: I0219 06:51:00.234655 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hz76m"
Feb 19 06:51:00 crc kubenswrapper[5012]: I0219 06:51:00.315559 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hz76m"]
Feb 19 06:51:00 crc kubenswrapper[5012]: I0219 06:51:00.703489 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3"
Feb 19 06:51:00 crc kubenswrapper[5012]: E0219 06:51:00.704060 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.173997 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hz76m" podUID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerName="registry-server" containerID="cri-o://8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83" gracePeriod=2
Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.695982 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hz76m"
Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.802398 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-catalog-content\") pod \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") "
Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.802858 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4xtk\" (UniqueName: \"kubernetes.io/projected/d5865e83-a688-4445-8b42-3ebaf9f9c74e-kube-api-access-h4xtk\") pod \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") "
Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.803028 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-utilities\") pod \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\" (UID: \"d5865e83-a688-4445-8b42-3ebaf9f9c74e\") "
Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.803691 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-utilities" (OuterVolumeSpecName: "utilities") pod "d5865e83-a688-4445-8b42-3ebaf9f9c74e" (UID: "d5865e83-a688-4445-8b42-3ebaf9f9c74e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.804802 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.811803 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5865e83-a688-4445-8b42-3ebaf9f9c74e-kube-api-access-h4xtk" (OuterVolumeSpecName: "kube-api-access-h4xtk") pod "d5865e83-a688-4445-8b42-3ebaf9f9c74e" (UID: "d5865e83-a688-4445-8b42-3ebaf9f9c74e"). InnerVolumeSpecName "kube-api-access-h4xtk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.855655 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5865e83-a688-4445-8b42-3ebaf9f9c74e" (UID: "d5865e83-a688-4445-8b42-3ebaf9f9c74e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.907123 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5865e83-a688-4445-8b42-3ebaf9f9c74e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 06:51:02 crc kubenswrapper[5012]: I0219 06:51:02.907166 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4xtk\" (UniqueName: \"kubernetes.io/projected/d5865e83-a688-4445-8b42-3ebaf9f9c74e-kube-api-access-h4xtk\") on node \"crc\" DevicePath \"\""
Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.188640 5012 generic.go:334] "Generic (PLEG): container finished" podID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerID="8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83" exitCode=0
Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.188715 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hz76m"
Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.188743 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hz76m" event={"ID":"d5865e83-a688-4445-8b42-3ebaf9f9c74e","Type":"ContainerDied","Data":"8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83"}
Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.189151 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hz76m" event={"ID":"d5865e83-a688-4445-8b42-3ebaf9f9c74e","Type":"ContainerDied","Data":"18947d88a25745afff68df1e41694c114c48442134135268e3638f0b3c1c1e62"}
Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.189181 5012 scope.go:117] "RemoveContainer" containerID="8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83"
Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.229089 5012 scope.go:117] "RemoveContainer" containerID="5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39"
Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.258881 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hz76m"]
Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.270659 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hz76m"]
Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.276129 5012 scope.go:117] "RemoveContainer" containerID="6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec"
Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.323268 5012 scope.go:117] "RemoveContainer" containerID="8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83"
Feb 19 06:51:03 crc kubenswrapper[5012]: E0219 06:51:03.323900 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83\": container with ID starting with 8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83 not found: ID does not exist" containerID="8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83"
Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.323949 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83"} err="failed to get container status \"8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83\": rpc error: code = NotFound desc = could not find container \"8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83\": container with ID starting with 8ab831775ae9850ec9326512af4ed9a231ab5760d811c4f245aae3f828b73a83 not found: ID does not exist"
Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.323977 5012 scope.go:117] "RemoveContainer" containerID="5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39"
Feb 19 06:51:03 crc kubenswrapper[5012]: E0219 06:51:03.324347 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39\": container with ID starting with 5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39 not found: ID does not exist" containerID="5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39"
Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.324373 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39"} err="failed to get container status \"5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39\": rpc error: code = NotFound desc = could not find container \"5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39\": container with ID starting with 5e8d3c4e0662b0ff8cdfd573a7e1c5c4a7e9f082ba727a5ce7cd41fcb2c62c39 not found: ID does not exist"
Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.324385 5012 scope.go:117] "RemoveContainer" containerID="6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec"
Feb 19 06:51:03 crc kubenswrapper[5012]: E0219 06:51:03.324765 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec\": container with ID starting with 6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec not found: ID does not exist" containerID="6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec"
Feb 19 06:51:03 crc kubenswrapper[5012]: I0219 06:51:03.324795 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec"} err="failed to get container status \"6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec\": rpc error: code = NotFound desc = could not find container \"6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec\": container with ID starting with 6d365cf0eea8d47fe94a91240261923b640f88fe256d0a73b96d66c7eaff87ec not found: ID does not exist"
Feb 19 06:51:04 crc kubenswrapper[5012]: I0219 06:51:04.738033 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" path="/var/lib/kubelet/pods/d5865e83-a688-4445-8b42-3ebaf9f9c74e/volumes"
Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.356353 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hncx9/must-gather-gs9fs"]
Feb 19 06:51:07 crc kubenswrapper[5012]: E0219 06:51:07.357248 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerName="extract-utilities"
Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.357265 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerName="extract-utilities"
Feb 19 06:51:07 crc kubenswrapper[5012]: E0219 06:51:07.357289 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerName="registry-server"
Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.357298 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerName="registry-server"
Feb 19 06:51:07 crc kubenswrapper[5012]: E0219 06:51:07.357337 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerName="extract-content"
Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.357345 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerName="extract-content"
Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.357655 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5865e83-a688-4445-8b42-3ebaf9f9c74e" containerName="registry-server"
Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.359167 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/must-gather-gs9fs"
Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.361729 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hncx9"/"kube-root-ca.crt"
Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.361832 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-hncx9"/"default-dockercfg-wpbmp"
Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.362736 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hncx9"/"openshift-service-ca.crt"
Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.366025 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hncx9/must-gather-gs9fs"]
Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.413821 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzz4l\" (UniqueName: \"kubernetes.io/projected/5afd9390-aa19-4b48-b659-089e59ea82e5-kube-api-access-vzz4l\") pod \"must-gather-gs9fs\" (UID: \"5afd9390-aa19-4b48-b659-089e59ea82e5\") " pod="openshift-must-gather-hncx9/must-gather-gs9fs"
Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.413907 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5afd9390-aa19-4b48-b659-089e59ea82e5-must-gather-output\") pod \"must-gather-gs9fs\" (UID: \"5afd9390-aa19-4b48-b659-089e59ea82e5\") " pod="openshift-must-gather-hncx9/must-gather-gs9fs"
Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.526490 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzz4l\" (UniqueName: \"kubernetes.io/projected/5afd9390-aa19-4b48-b659-089e59ea82e5-kube-api-access-vzz4l\") pod \"must-gather-gs9fs\" (UID: \"5afd9390-aa19-4b48-b659-089e59ea82e5\") " pod="openshift-must-gather-hncx9/must-gather-gs9fs"
Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.526609 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5afd9390-aa19-4b48-b659-089e59ea82e5-must-gather-output\") pod \"must-gather-gs9fs\" (UID: \"5afd9390-aa19-4b48-b659-089e59ea82e5\") " pod="openshift-must-gather-hncx9/must-gather-gs9fs"
Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.527111 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5afd9390-aa19-4b48-b659-089e59ea82e5-must-gather-output\") pod \"must-gather-gs9fs\" (UID: \"5afd9390-aa19-4b48-b659-089e59ea82e5\") " pod="openshift-must-gather-hncx9/must-gather-gs9fs"
Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.559629 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzz4l\" (UniqueName: \"kubernetes.io/projected/5afd9390-aa19-4b48-b659-089e59ea82e5-kube-api-access-vzz4l\") pod \"must-gather-gs9fs\" (UID: \"5afd9390-aa19-4b48-b659-089e59ea82e5\") " pod="openshift-must-gather-hncx9/must-gather-gs9fs"
Feb 19 06:51:07 crc kubenswrapper[5012]: I0219 06:51:07.686691 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/must-gather-gs9fs"
Feb 19 06:51:08 crc kubenswrapper[5012]: I0219 06:51:08.232363 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hncx9/must-gather-gs9fs"]
Feb 19 06:51:08 crc kubenswrapper[5012]: I0219 06:51:08.252176 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/must-gather-gs9fs" event={"ID":"5afd9390-aa19-4b48-b659-089e59ea82e5","Type":"ContainerStarted","Data":"bc49e8ad6e545cc5030ac5e432a623684663b16c84d6e5b7cede6ad7d29cfea6"}
Feb 19 06:51:14 crc kubenswrapper[5012]: I0219 06:51:14.714993 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3"
Feb 19 06:51:14 crc kubenswrapper[5012]: E0219 06:51:14.715721 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:51:15 crc kubenswrapper[5012]: I0219 06:51:15.326228 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/must-gather-gs9fs" event={"ID":"5afd9390-aa19-4b48-b659-089e59ea82e5","Type":"ContainerStarted","Data":"1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828"}
Feb 19 06:51:16 crc kubenswrapper[5012]: I0219 06:51:16.341371 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/must-gather-gs9fs" event={"ID":"5afd9390-aa19-4b48-b659-089e59ea82e5","Type":"ContainerStarted","Data":"7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48"}
Feb 19 06:51:16 crc kubenswrapper[5012]: I0219 06:51:16.362800 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hncx9/must-gather-gs9fs" podStartSLOduration=2.716280245 podStartE2EDuration="9.362780352s" podCreationTimestamp="2026-02-19 06:51:07 +0000 UTC" firstStartedPulling="2026-02-19 06:51:08.225052749 +0000 UTC m=+5164.258375318" lastFinishedPulling="2026-02-19 06:51:14.871552855 +0000 UTC m=+5170.904875425" observedRunningTime="2026-02-19 06:51:16.35979588 +0000 UTC m=+5172.393118469" watchObservedRunningTime="2026-02-19 06:51:16.362780352 +0000 UTC m=+5172.396102921"
Feb 19 06:51:20 crc kubenswrapper[5012]: I0219 06:51:20.076417 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hncx9/crc-debug-57vjb"]
Feb 19 06:51:20 crc kubenswrapper[5012]: I0219 06:51:20.080367 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-57vjb"
Feb 19 06:51:20 crc kubenswrapper[5012]: I0219 06:51:20.135378 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-host\") pod \"crc-debug-57vjb\" (UID: \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\") " pod="openshift-must-gather-hncx9/crc-debug-57vjb"
Feb 19 06:51:20 crc kubenswrapper[5012]: I0219 06:51:20.135697 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjgqb\" (UniqueName: \"kubernetes.io/projected/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-kube-api-access-sjgqb\") pod \"crc-debug-57vjb\" (UID: \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\") " pod="openshift-must-gather-hncx9/crc-debug-57vjb"
Feb 19 06:51:20 crc kubenswrapper[5012]: I0219 06:51:20.237960 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-host\") pod \"crc-debug-57vjb\" (UID: \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\") " pod="openshift-must-gather-hncx9/crc-debug-57vjb"
Feb 19 06:51:20 crc kubenswrapper[5012]: I0219 06:51:20.238070 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjgqb\" (UniqueName: \"kubernetes.io/projected/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-kube-api-access-sjgqb\") pod \"crc-debug-57vjb\" (UID: \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\") " pod="openshift-must-gather-hncx9/crc-debug-57vjb"
Feb 19 06:51:20 crc kubenswrapper[5012]: I0219 06:51:20.238152 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-host\") pod \"crc-debug-57vjb\" (UID: \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\") " pod="openshift-must-gather-hncx9/crc-debug-57vjb"
Feb 19 06:51:20 crc kubenswrapper[5012]: I0219 06:51:20.264485 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjgqb\" (UniqueName: \"kubernetes.io/projected/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-kube-api-access-sjgqb\") pod \"crc-debug-57vjb\" (UID: \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\") " pod="openshift-must-gather-hncx9/crc-debug-57vjb"
Feb 19 06:51:20 crc kubenswrapper[5012]: I0219 06:51:20.404926 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-57vjb"
Feb 19 06:51:20 crc kubenswrapper[5012]: W0219 06:51:20.443974 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod809fe06a_5a2d_4ac8_90d0_5a2569f3e116.slice/crio-4610aaf767b371d7342611a4d17e5add337a68f49f5566ada2aafea0d8b89112 WatchSource:0}: Error finding container 4610aaf767b371d7342611a4d17e5add337a68f49f5566ada2aafea0d8b89112: Status 404 returned error can't find the container with id 4610aaf767b371d7342611a4d17e5add337a68f49f5566ada2aafea0d8b89112
Feb 19 06:51:21 crc kubenswrapper[5012]: I0219 06:51:21.388545 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/crc-debug-57vjb" event={"ID":"809fe06a-5a2d-4ac8-90d0-5a2569f3e116","Type":"ContainerStarted","Data":"4610aaf767b371d7342611a4d17e5add337a68f49f5566ada2aafea0d8b89112"}
Feb 19 06:51:29 crc kubenswrapper[5012]: I0219 06:51:29.703560 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3"
Feb 19 06:51:29 crc kubenswrapper[5012]: E0219 06:51:29.704346 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:51:32 crc kubenswrapper[5012]: I0219 06:51:32.485273 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/crc-debug-57vjb" event={"ID":"809fe06a-5a2d-4ac8-90d0-5a2569f3e116","Type":"ContainerStarted","Data":"eee8e3c869cd66a7f0fdd02a2d1a9b68c170e21d37668f164d794773d0198ed5"}
Feb 19 06:51:32 crc kubenswrapper[5012]: I0219 06:51:32.509353 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hncx9/crc-debug-57vjb" podStartSLOduration=1.628782629 podStartE2EDuration="12.509331031s" podCreationTimestamp="2026-02-19 06:51:20 +0000 UTC" firstStartedPulling="2026-02-19 06:51:20.446037146 +0000 UTC m=+5176.479359725" lastFinishedPulling="2026-02-19 06:51:31.326585558 +0000 UTC m=+5187.359908127" observedRunningTime="2026-02-19 06:51:32.498332676 +0000 UTC m=+5188.531655245" watchObservedRunningTime="2026-02-19 06:51:32.509331031 +0000 UTC m=+5188.542653600"
Feb 19 06:51:43 crc kubenswrapper[5012]: I0219 06:51:43.703073 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3"
Feb 19 06:51:43 crc kubenswrapper[5012]: E0219 06:51:43.704732 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.093170 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-drxrq"]
Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.096761 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-drxrq"
Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.111017 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drxrq"]
Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.168661 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-catalog-content\") pod \"redhat-operators-drxrq\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " pod="openshift-marketplace/redhat-operators-drxrq"
Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.168771 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84v6t\" (UniqueName: \"kubernetes.io/projected/296d5f6b-220d-4eda-96e4-c405190f28dc-kube-api-access-84v6t\") pod \"redhat-operators-drxrq\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " pod="openshift-marketplace/redhat-operators-drxrq"
Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.168805 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-utilities\") pod \"redhat-operators-drxrq\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " pod="openshift-marketplace/redhat-operators-drxrq"
Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.270876 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-catalog-content\") pod \"redhat-operators-drxrq\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " pod="openshift-marketplace/redhat-operators-drxrq"
Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.270989 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84v6t\" (UniqueName: \"kubernetes.io/projected/296d5f6b-220d-4eda-96e4-c405190f28dc-kube-api-access-84v6t\") pod \"redhat-operators-drxrq\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " pod="openshift-marketplace/redhat-operators-drxrq"
Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.271014 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-utilities\") pod \"redhat-operators-drxrq\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " pod="openshift-marketplace/redhat-operators-drxrq"
Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.271383 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-catalog-content\") pod \"redhat-operators-drxrq\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " pod="openshift-marketplace/redhat-operators-drxrq"
Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.271537 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-utilities\") pod \"redhat-operators-drxrq\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " pod="openshift-marketplace/redhat-operators-drxrq"
Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.293764 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84v6t\" (UniqueName: \"kubernetes.io/projected/296d5f6b-220d-4eda-96e4-c405190f28dc-kube-api-access-84v6t\") pod \"redhat-operators-drxrq\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " pod="openshift-marketplace/redhat-operators-drxrq"
Feb 19 06:51:44 crc kubenswrapper[5012]: I0219 06:51:44.428941 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-drxrq"
Feb 19 06:51:45 crc kubenswrapper[5012]: I0219 06:51:45.015355 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drxrq"]
Feb 19 06:51:45 crc kubenswrapper[5012]: I0219 06:51:45.606107 5012 generic.go:334] "Generic (PLEG): container finished" podID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerID="07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1" exitCode=0
Feb 19 06:51:45 crc kubenswrapper[5012]: I0219 06:51:45.606435 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drxrq" event={"ID":"296d5f6b-220d-4eda-96e4-c405190f28dc","Type":"ContainerDied","Data":"07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1"}
Feb 19 06:51:45 crc kubenswrapper[5012]: I0219 06:51:45.607145 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drxrq" event={"ID":"296d5f6b-220d-4eda-96e4-c405190f28dc","Type":"ContainerStarted","Data":"6de2c1173cadccb7b874d72d1d3862009696c41fda3c7af45ebce6e2f0fa3d9c"}
Feb 19 06:51:46 crc kubenswrapper[5012]: I0219 06:51:46.621852 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drxrq" event={"ID":"296d5f6b-220d-4eda-96e4-c405190f28dc","Type":"ContainerStarted","Data":"ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec"}
Feb 19 06:51:48 crc kubenswrapper[5012]: I0219 06:51:48.643027 5012 generic.go:334] "Generic (PLEG): container finished" podID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerID="ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec" exitCode=0
Feb 19 06:51:48 crc kubenswrapper[5012]: I0219 06:51:48.643117 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drxrq" event={"ID":"296d5f6b-220d-4eda-96e4-c405190f28dc","Type":"ContainerDied","Data":"ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec"}
Feb 19 06:51:50 crc kubenswrapper[5012]: I0219 06:51:50.666083 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drxrq" event={"ID":"296d5f6b-220d-4eda-96e4-c405190f28dc","Type":"ContainerStarted","Data":"0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c"}
Feb 19 06:51:50 crc kubenswrapper[5012]: I0219 06:51:50.696744 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-drxrq" podStartSLOduration=2.433438421 podStartE2EDuration="6.696724122s" podCreationTimestamp="2026-02-19 06:51:44 +0000 UTC" firstStartedPulling="2026-02-19 06:51:45.609073364 +0000 UTC m=+5201.642395933" lastFinishedPulling="2026-02-19 06:51:49.872359065 +0000 UTC m=+5205.905681634" observedRunningTime="2026-02-19 06:51:50.691338662 +0000 UTC m=+5206.724661231" watchObservedRunningTime="2026-02-19 06:51:50.696724122 +0000 UTC m=+5206.730046691"
Feb 19 06:51:54 crc kubenswrapper[5012]: I0219 06:51:54.429844 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-drxrq"
Feb 19 06:51:54 crc kubenswrapper[5012]: I0219 06:51:54.430390 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-drxrq"
Feb 19 06:51:55 crc kubenswrapper[5012]: I0219 06:51:55.481353 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-drxrq" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="registry-server" probeResult="failure" output=<
Feb 19 06:51:55 crc kubenswrapper[5012]: 	timeout: failed to connect service ":50051" within 1s
Feb 19 06:51:55 crc kubenswrapper[5012]:  >
Feb 19 06:51:55 crc kubenswrapper[5012]: I0219 06:51:55.703072 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3"
Feb 19 06:51:55 crc kubenswrapper[5012]: E0219 06:51:55.703421 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:52:03 crc kubenswrapper[5012]: I0219 06:52:03.864611 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bgkfc"]
Feb 19 06:52:03 crc kubenswrapper[5012]: I0219 06:52:03.867085 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgkfc"
Feb 19 06:52:03 crc kubenswrapper[5012]: I0219 06:52:03.877571 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgkfc"]
Feb 19 06:52:03 crc kubenswrapper[5012]: I0219 06:52:03.933714 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-catalog-content\") pod \"redhat-marketplace-bgkfc\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " pod="openshift-marketplace/redhat-marketplace-bgkfc"
Feb 19 06:52:03 crc kubenswrapper[5012]: I0219 06:52:03.933871 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfjbs\" (UniqueName: \"kubernetes.io/projected/b884b761-ae1b-4cce-b4fb-478f3c847090-kube-api-access-gfjbs\") pod \"redhat-marketplace-bgkfc\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " pod="openshift-marketplace/redhat-marketplace-bgkfc"
Feb 19 06:52:03 crc kubenswrapper[5012]: I0219 06:52:03.933896 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-utilities\") pod \"redhat-marketplace-bgkfc\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " pod="openshift-marketplace/redhat-marketplace-bgkfc"
Feb 19 06:52:04 crc kubenswrapper[5012]: I0219 06:52:04.036381 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-catalog-content\") pod \"redhat-marketplace-bgkfc\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " pod="openshift-marketplace/redhat-marketplace-bgkfc"
Feb 19 06:52:04 crc kubenswrapper[5012]: I0219 06:52:04.036515 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfjbs\" (UniqueName: \"kubernetes.io/projected/b884b761-ae1b-4cce-b4fb-478f3c847090-kube-api-access-gfjbs\") pod \"redhat-marketplace-bgkfc\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " pod="openshift-marketplace/redhat-marketplace-bgkfc"
Feb 19 06:52:04 crc kubenswrapper[5012]: I0219 06:52:04.036545 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-utilities\") pod \"redhat-marketplace-bgkfc\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " pod="openshift-marketplace/redhat-marketplace-bgkfc"
Feb 19 06:52:04 crc kubenswrapper[5012]: I0219 06:52:04.036952 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-catalog-content\") pod \"redhat-marketplace-bgkfc\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " pod="openshift-marketplace/redhat-marketplace-bgkfc"
Feb 19 06:52:04 crc kubenswrapper[5012]: I0219 06:52:04.037069 5012 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-utilities\") pod \"redhat-marketplace-bgkfc\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:04 crc kubenswrapper[5012]: I0219 06:52:04.069528 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfjbs\" (UniqueName: \"kubernetes.io/projected/b884b761-ae1b-4cce-b4fb-478f3c847090-kube-api-access-gfjbs\") pod \"redhat-marketplace-bgkfc\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:04 crc kubenswrapper[5012]: I0219 06:52:04.187112 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:04 crc kubenswrapper[5012]: I0219 06:52:04.740464 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgkfc"] Feb 19 06:52:04 crc kubenswrapper[5012]: W0219 06:52:04.746090 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb884b761_ae1b_4cce_b4fb_478f3c847090.slice/crio-38010c4653b9ffc7e2858504be509a2397bd5c2505954ddda11f73e6e6a61c70 WatchSource:0}: Error finding container 38010c4653b9ffc7e2858504be509a2397bd5c2505954ddda11f73e6e6a61c70: Status 404 returned error can't find the container with id 38010c4653b9ffc7e2858504be509a2397bd5c2505954ddda11f73e6e6a61c70 Feb 19 06:52:04 crc kubenswrapper[5012]: I0219 06:52:04.814648 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgkfc" event={"ID":"b884b761-ae1b-4cce-b4fb-478f3c847090","Type":"ContainerStarted","Data":"38010c4653b9ffc7e2858504be509a2397bd5c2505954ddda11f73e6e6a61c70"} Feb 19 06:52:05 crc kubenswrapper[5012]: I0219 06:52:05.500129 5012 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-drxrq" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="registry-server" probeResult="failure" output=< Feb 19 06:52:05 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 06:52:05 crc kubenswrapper[5012]: > Feb 19 06:52:05 crc kubenswrapper[5012]: I0219 06:52:05.825852 5012 generic.go:334] "Generic (PLEG): container finished" podID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerID="f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362" exitCode=0 Feb 19 06:52:05 crc kubenswrapper[5012]: I0219 06:52:05.825902 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgkfc" event={"ID":"b884b761-ae1b-4cce-b4fb-478f3c847090","Type":"ContainerDied","Data":"f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362"} Feb 19 06:52:06 crc kubenswrapper[5012]: I0219 06:52:06.709026 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:52:06 crc kubenswrapper[5012]: E0219 06:52:06.709857 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 06:52:06 crc kubenswrapper[5012]: I0219 06:52:06.843015 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgkfc" event={"ID":"b884b761-ae1b-4cce-b4fb-478f3c847090","Type":"ContainerStarted","Data":"00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1"} Feb 19 06:52:07 crc kubenswrapper[5012]: I0219 06:52:07.858895 5012 
generic.go:334] "Generic (PLEG): container finished" podID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerID="00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1" exitCode=0 Feb 19 06:52:07 crc kubenswrapper[5012]: I0219 06:52:07.858965 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgkfc" event={"ID":"b884b761-ae1b-4cce-b4fb-478f3c847090","Type":"ContainerDied","Data":"00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1"} Feb 19 06:52:08 crc kubenswrapper[5012]: I0219 06:52:08.882122 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgkfc" event={"ID":"b884b761-ae1b-4cce-b4fb-478f3c847090","Type":"ContainerStarted","Data":"b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d"} Feb 19 06:52:08 crc kubenswrapper[5012]: I0219 06:52:08.909726 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bgkfc" podStartSLOduration=3.439005719 podStartE2EDuration="5.90970394s" podCreationTimestamp="2026-02-19 06:52:03 +0000 UTC" firstStartedPulling="2026-02-19 06:52:05.828295221 +0000 UTC m=+5221.861617790" lastFinishedPulling="2026-02-19 06:52:08.298993442 +0000 UTC m=+5224.332316011" observedRunningTime="2026-02-19 06:52:08.902908016 +0000 UTC m=+5224.936230585" watchObservedRunningTime="2026-02-19 06:52:08.90970394 +0000 UTC m=+5224.943026519" Feb 19 06:52:14 crc kubenswrapper[5012]: I0219 06:52:14.188217 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:14 crc kubenswrapper[5012]: I0219 06:52:14.190037 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:14 crc kubenswrapper[5012]: I0219 06:52:14.243654 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:14 crc kubenswrapper[5012]: I0219 06:52:14.476122 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:52:14 crc kubenswrapper[5012]: I0219 06:52:14.572542 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:52:15 crc kubenswrapper[5012]: I0219 06:52:14.998475 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:15 crc kubenswrapper[5012]: I0219 06:52:15.494281 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-drxrq"] Feb 19 06:52:15 crc kubenswrapper[5012]: I0219 06:52:15.947380 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-drxrq" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="registry-server" containerID="cri-o://0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c" gracePeriod=2 Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.390655 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.440894 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-catalog-content\") pod \"296d5f6b-220d-4eda-96e4-c405190f28dc\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.440997 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84v6t\" (UniqueName: \"kubernetes.io/projected/296d5f6b-220d-4eda-96e4-c405190f28dc-kube-api-access-84v6t\") pod \"296d5f6b-220d-4eda-96e4-c405190f28dc\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.441032 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-utilities\") pod \"296d5f6b-220d-4eda-96e4-c405190f28dc\" (UID: \"296d5f6b-220d-4eda-96e4-c405190f28dc\") " Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.442201 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-utilities" (OuterVolumeSpecName: "utilities") pod "296d5f6b-220d-4eda-96e4-c405190f28dc" (UID: "296d5f6b-220d-4eda-96e4-c405190f28dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.463510 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/296d5f6b-220d-4eda-96e4-c405190f28dc-kube-api-access-84v6t" (OuterVolumeSpecName: "kube-api-access-84v6t") pod "296d5f6b-220d-4eda-96e4-c405190f28dc" (UID: "296d5f6b-220d-4eda-96e4-c405190f28dc"). InnerVolumeSpecName "kube-api-access-84v6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.543554 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84v6t\" (UniqueName: \"kubernetes.io/projected/296d5f6b-220d-4eda-96e4-c405190f28dc-kube-api-access-84v6t\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.543583 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.580389 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "296d5f6b-220d-4eda-96e4-c405190f28dc" (UID: "296d5f6b-220d-4eda-96e4-c405190f28dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.645412 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/296d5f6b-220d-4eda-96e4-c405190f28dc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.959104 5012 generic.go:334] "Generic (PLEG): container finished" podID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerID="0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c" exitCode=0 Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.959176 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drxrq" event={"ID":"296d5f6b-220d-4eda-96e4-c405190f28dc","Type":"ContainerDied","Data":"0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c"} Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.959621 5012 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-drxrq" event={"ID":"296d5f6b-220d-4eda-96e4-c405190f28dc","Type":"ContainerDied","Data":"6de2c1173cadccb7b874d72d1d3862009696c41fda3c7af45ebce6e2f0fa3d9c"} Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.959237 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-drxrq" Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.959658 5012 scope.go:117] "RemoveContainer" containerID="0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c" Feb 19 06:52:16 crc kubenswrapper[5012]: I0219 06:52:16.984095 5012 scope.go:117] "RemoveContainer" containerID="ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec" Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.001824 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-drxrq"] Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.008956 5012 scope.go:117] "RemoveContainer" containerID="07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1" Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.023083 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-drxrq"] Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.045470 5012 scope.go:117] "RemoveContainer" containerID="0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c" Feb 19 06:52:17 crc kubenswrapper[5012]: E0219 06:52:17.046014 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c\": container with ID starting with 0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c not found: ID does not exist" containerID="0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c" Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.046051 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c"} err="failed to get container status \"0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c\": rpc error: code = NotFound desc = could not find container \"0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c\": container with ID starting with 0022f372e4324b0d6bfa99608e930edc510abf6325bf47612a87faf72acbb99c not found: ID does not exist" Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.046074 5012 scope.go:117] "RemoveContainer" containerID="ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec" Feb 19 06:52:17 crc kubenswrapper[5012]: E0219 06:52:17.046659 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec\": container with ID starting with ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec not found: ID does not exist" containerID="ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec" Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.046726 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec"} err="failed to get container status \"ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec\": rpc error: code = NotFound desc = could not find container \"ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec\": container with ID starting with ae8d199276e68a6867c9c7da90f1049d8fa8ee3d11314b9d2f1221e0f562b6ec not found: ID does not exist" Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.046775 5012 scope.go:117] "RemoveContainer" containerID="07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1" Feb 19 06:52:17 crc kubenswrapper[5012]: E0219 
06:52:17.047141 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1\": container with ID starting with 07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1 not found: ID does not exist" containerID="07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1" Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.047183 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1"} err="failed to get container status \"07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1\": rpc error: code = NotFound desc = could not find container \"07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1\": container with ID starting with 07d37d96170fca052c4e55adc050253b512b2b89a05dee4997c1f844a22690b1 not found: ID does not exist" Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.297431 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgkfc"] Feb 19 06:52:17 crc kubenswrapper[5012]: I0219 06:52:17.982658 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bgkfc" podUID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerName="registry-server" containerID="cri-o://b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d" gracePeriod=2 Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.477364 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.586222 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-catalog-content\") pod \"b884b761-ae1b-4cce-b4fb-478f3c847090\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.586469 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-utilities\") pod \"b884b761-ae1b-4cce-b4fb-478f3c847090\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.586635 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfjbs\" (UniqueName: \"kubernetes.io/projected/b884b761-ae1b-4cce-b4fb-478f3c847090-kube-api-access-gfjbs\") pod \"b884b761-ae1b-4cce-b4fb-478f3c847090\" (UID: \"b884b761-ae1b-4cce-b4fb-478f3c847090\") " Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.587388 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-utilities" (OuterVolumeSpecName: "utilities") pod "b884b761-ae1b-4cce-b4fb-478f3c847090" (UID: "b884b761-ae1b-4cce-b4fb-478f3c847090"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.594405 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b884b761-ae1b-4cce-b4fb-478f3c847090-kube-api-access-gfjbs" (OuterVolumeSpecName: "kube-api-access-gfjbs") pod "b884b761-ae1b-4cce-b4fb-478f3c847090" (UID: "b884b761-ae1b-4cce-b4fb-478f3c847090"). InnerVolumeSpecName "kube-api-access-gfjbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.620323 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b884b761-ae1b-4cce-b4fb-478f3c847090" (UID: "b884b761-ae1b-4cce-b4fb-478f3c847090"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.689276 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.689323 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b884b761-ae1b-4cce-b4fb-478f3c847090-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.689334 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfjbs\" (UniqueName: \"kubernetes.io/projected/b884b761-ae1b-4cce-b4fb-478f3c847090-kube-api-access-gfjbs\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:18 crc kubenswrapper[5012]: I0219 06:52:18.715465 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" path="/var/lib/kubelet/pods/296d5f6b-220d-4eda-96e4-c405190f28dc/volumes" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.009027 5012 generic.go:334] "Generic (PLEG): container finished" podID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerID="b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d" exitCode=0 Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.009108 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgkfc" 
event={"ID":"b884b761-ae1b-4cce-b4fb-478f3c847090","Type":"ContainerDied","Data":"b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d"} Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.009154 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bgkfc" event={"ID":"b884b761-ae1b-4cce-b4fb-478f3c847090","Type":"ContainerDied","Data":"38010c4653b9ffc7e2858504be509a2397bd5c2505954ddda11f73e6e6a61c70"} Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.009199 5012 scope.go:117] "RemoveContainer" containerID="b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.009706 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bgkfc" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.037081 5012 scope.go:117] "RemoveContainer" containerID="00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.064097 5012 scope.go:117] "RemoveContainer" containerID="f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.071315 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgkfc"] Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.082266 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bgkfc"] Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.109659 5012 scope.go:117] "RemoveContainer" containerID="b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d" Feb 19 06:52:19 crc kubenswrapper[5012]: E0219 06:52:19.116439 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d\": container 
with ID starting with b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d not found: ID does not exist" containerID="b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.116487 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d"} err="failed to get container status \"b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d\": rpc error: code = NotFound desc = could not find container \"b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d\": container with ID starting with b895ea6c76bd79b10f9529262b7aedc27514193c4d888a5dbe2b5c5b6caff45d not found: ID does not exist" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.116514 5012 scope.go:117] "RemoveContainer" containerID="00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1" Feb 19 06:52:19 crc kubenswrapper[5012]: E0219 06:52:19.116996 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1\": container with ID starting with 00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1 not found: ID does not exist" containerID="00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.117034 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1"} err="failed to get container status \"00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1\": rpc error: code = NotFound desc = could not find container \"00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1\": container with ID starting with 00d00a7d3cf2933a58e722b997f53f4291d71f7cbeed04565384a4c2c2ef2fb1 not 
found: ID does not exist" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.117058 5012 scope.go:117] "RemoveContainer" containerID="f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362" Feb 19 06:52:19 crc kubenswrapper[5012]: E0219 06:52:19.117431 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362\": container with ID starting with f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362 not found: ID does not exist" containerID="f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362" Feb 19 06:52:19 crc kubenswrapper[5012]: I0219 06:52:19.117456 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362"} err="failed to get container status \"f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362\": rpc error: code = NotFound desc = could not find container \"f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362\": container with ID starting with f7605391874e17ffb6899f4a0b119d065ff3be025a48d499e1d75e450bcc2362 not found: ID does not exist" Feb 19 06:52:20 crc kubenswrapper[5012]: I0219 06:52:20.704184 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3" Feb 19 06:52:20 crc kubenswrapper[5012]: I0219 06:52:20.730316 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b884b761-ae1b-4cce-b4fb-478f3c847090" path="/var/lib/kubelet/pods/b884b761-ae1b-4cce-b4fb-478f3c847090/volumes" Feb 19 06:52:21 crc kubenswrapper[5012]: I0219 06:52:21.035598 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"48054e4bb9edb7bcb5d43f31e62ca380e81f64c675e1f2cd4a65b9f2238ff941"} Feb 19 06:52:25 crc kubenswrapper[5012]: I0219 06:52:25.070107 5012 generic.go:334] "Generic (PLEG): container finished" podID="809fe06a-5a2d-4ac8-90d0-5a2569f3e116" containerID="eee8e3c869cd66a7f0fdd02a2d1a9b68c170e21d37668f164d794773d0198ed5" exitCode=0 Feb 19 06:52:25 crc kubenswrapper[5012]: I0219 06:52:25.070148 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/crc-debug-57vjb" event={"ID":"809fe06a-5a2d-4ac8-90d0-5a2569f3e116","Type":"ContainerDied","Data":"eee8e3c869cd66a7f0fdd02a2d1a9b68c170e21d37668f164d794773d0198ed5"} Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.223272 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-57vjb" Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.248854 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjgqb\" (UniqueName: \"kubernetes.io/projected/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-kube-api-access-sjgqb\") pod \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\" (UID: \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\") " Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.249519 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-host\") pod \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\" (UID: \"809fe06a-5a2d-4ac8-90d0-5a2569f3e116\") " Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.250021 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-host" (OuterVolumeSpecName: "host") pod "809fe06a-5a2d-4ac8-90d0-5a2569f3e116" (UID: "809fe06a-5a2d-4ac8-90d0-5a2569f3e116"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.261353 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-kube-api-access-sjgqb" (OuterVolumeSpecName: "kube-api-access-sjgqb") pod "809fe06a-5a2d-4ac8-90d0-5a2569f3e116" (UID: "809fe06a-5a2d-4ac8-90d0-5a2569f3e116"). InnerVolumeSpecName "kube-api-access-sjgqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.271099 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hncx9/crc-debug-57vjb"] Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.279688 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hncx9/crc-debug-57vjb"] Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.352414 5012 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-host\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.352445 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjgqb\" (UniqueName: \"kubernetes.io/projected/809fe06a-5a2d-4ac8-90d0-5a2569f3e116-kube-api-access-sjgqb\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:26 crc kubenswrapper[5012]: I0219 06:52:26.717202 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="809fe06a-5a2d-4ac8-90d0-5a2569f3e116" path="/var/lib/kubelet/pods/809fe06a-5a2d-4ac8-90d0-5a2569f3e116/volumes" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.102254 5012 scope.go:117] "RemoveContainer" containerID="eee8e3c869cd66a7f0fdd02a2d1a9b68c170e21d37668f164d794773d0198ed5" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.102389 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-57vjb" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.515376 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hncx9/crc-debug-nxgm9"] Feb 19 06:52:27 crc kubenswrapper[5012]: E0219 06:52:27.515967 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerName="extract-content" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.515988 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerName="extract-content" Feb 19 06:52:27 crc kubenswrapper[5012]: E0219 06:52:27.516014 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="extract-utilities" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.516027 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="extract-utilities" Feb 19 06:52:27 crc kubenswrapper[5012]: E0219 06:52:27.516043 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerName="extract-utilities" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.516056 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerName="extract-utilities" Feb 19 06:52:27 crc kubenswrapper[5012]: E0219 06:52:27.516100 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerName="registry-server" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.516112 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerName="registry-server" Feb 19 06:52:27 crc kubenswrapper[5012]: E0219 06:52:27.516148 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="extract-content" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.516160 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="extract-content" Feb 19 06:52:27 crc kubenswrapper[5012]: E0219 06:52:27.516180 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="registry-server" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.516192 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="registry-server" Feb 19 06:52:27 crc kubenswrapper[5012]: E0219 06:52:27.516221 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809fe06a-5a2d-4ac8-90d0-5a2569f3e116" containerName="container-00" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.516257 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="809fe06a-5a2d-4ac8-90d0-5a2569f3e116" containerName="container-00" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.516632 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="b884b761-ae1b-4cce-b4fb-478f3c847090" containerName="registry-server" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.516655 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="296d5f6b-220d-4eda-96e4-c405190f28dc" containerName="registry-server" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.516687 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="809fe06a-5a2d-4ac8-90d0-5a2569f3e116" containerName="container-00" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.517730 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.578585 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhl5r\" (UniqueName: \"kubernetes.io/projected/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-kube-api-access-xhl5r\") pod \"crc-debug-nxgm9\" (UID: \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\") " pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.578671 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-host\") pod \"crc-debug-nxgm9\" (UID: \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\") " pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.682049 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhl5r\" (UniqueName: \"kubernetes.io/projected/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-kube-api-access-xhl5r\") pod \"crc-debug-nxgm9\" (UID: \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\") " pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.683250 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-host\") pod \"crc-debug-nxgm9\" (UID: \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\") " pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.683438 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-host\") pod \"crc-debug-nxgm9\" (UID: \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\") " pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:27 crc 
kubenswrapper[5012]: I0219 06:52:27.715037 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhl5r\" (UniqueName: \"kubernetes.io/projected/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-kube-api-access-xhl5r\") pod \"crc-debug-nxgm9\" (UID: \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\") " pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:27 crc kubenswrapper[5012]: I0219 06:52:27.850190 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:28 crc kubenswrapper[5012]: I0219 06:52:28.112364 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/crc-debug-nxgm9" event={"ID":"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0","Type":"ContainerStarted","Data":"3cefb9b619fd38672097387c5f39889450729554bf7d24dcd7f7df0f70d0fe02"} Feb 19 06:52:29 crc kubenswrapper[5012]: I0219 06:52:29.123487 5012 generic.go:334] "Generic (PLEG): container finished" podID="4f23c1e7-2ab4-499a-b925-f11ed5aff7d0" containerID="5f63a56f4608620beb3ed89096dc42c009d98d6c2c1d04eec01e0fcc60d308fd" exitCode=0 Feb 19 06:52:29 crc kubenswrapper[5012]: I0219 06:52:29.123555 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/crc-debug-nxgm9" event={"ID":"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0","Type":"ContainerDied","Data":"5f63a56f4608620beb3ed89096dc42c009d98d6c2c1d04eec01e0fcc60d308fd"} Feb 19 06:52:30 crc kubenswrapper[5012]: I0219 06:52:30.336564 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:30 crc kubenswrapper[5012]: I0219 06:52:30.450381 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-host\") pod \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\" (UID: \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\") " Feb 19 06:52:30 crc kubenswrapper[5012]: I0219 06:52:30.450464 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhl5r\" (UniqueName: \"kubernetes.io/projected/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-kube-api-access-xhl5r\") pod \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\" (UID: \"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0\") " Feb 19 06:52:30 crc kubenswrapper[5012]: I0219 06:52:30.450836 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-host" (OuterVolumeSpecName: "host") pod "4f23c1e7-2ab4-499a-b925-f11ed5aff7d0" (UID: "4f23c1e7-2ab4-499a-b925-f11ed5aff7d0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 06:52:30 crc kubenswrapper[5012]: I0219 06:52:30.465550 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-kube-api-access-xhl5r" (OuterVolumeSpecName: "kube-api-access-xhl5r") pod "4f23c1e7-2ab4-499a-b925-f11ed5aff7d0" (UID: "4f23c1e7-2ab4-499a-b925-f11ed5aff7d0"). InnerVolumeSpecName "kube-api-access-xhl5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:52:30 crc kubenswrapper[5012]: I0219 06:52:30.552215 5012 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-host\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:30 crc kubenswrapper[5012]: I0219 06:52:30.552249 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhl5r\" (UniqueName: \"kubernetes.io/projected/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0-kube-api-access-xhl5r\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:31 crc kubenswrapper[5012]: I0219 06:52:31.145346 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/crc-debug-nxgm9" event={"ID":"4f23c1e7-2ab4-499a-b925-f11ed5aff7d0","Type":"ContainerDied","Data":"3cefb9b619fd38672097387c5f39889450729554bf7d24dcd7f7df0f70d0fe02"} Feb 19 06:52:31 crc kubenswrapper[5012]: I0219 06:52:31.145643 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cefb9b619fd38672097387c5f39889450729554bf7d24dcd7f7df0f70d0fe02" Feb 19 06:52:31 crc kubenswrapper[5012]: I0219 06:52:31.145453 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-nxgm9" Feb 19 06:52:31 crc kubenswrapper[5012]: I0219 06:52:31.495342 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hncx9/crc-debug-nxgm9"] Feb 19 06:52:31 crc kubenswrapper[5012]: I0219 06:52:31.506185 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hncx9/crc-debug-nxgm9"] Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.723284 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f23c1e7-2ab4-499a-b925-f11ed5aff7d0" path="/var/lib/kubelet/pods/4f23c1e7-2ab4-499a-b925-f11ed5aff7d0/volumes" Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.724186 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hncx9/crc-debug-9hbl5"] Feb 19 06:52:32 crc kubenswrapper[5012]: E0219 06:52:32.724676 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f23c1e7-2ab4-499a-b925-f11ed5aff7d0" containerName="container-00" Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.724702 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f23c1e7-2ab4-499a-b925-f11ed5aff7d0" containerName="container-00" Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.724978 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f23c1e7-2ab4-499a-b925-f11ed5aff7d0" containerName="container-00" Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.726440 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.800853 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxm25\" (UniqueName: \"kubernetes.io/projected/2e79e07d-bc20-4488-8ebe-4805bf39854e-kube-api-access-xxm25\") pod \"crc-debug-9hbl5\" (UID: \"2e79e07d-bc20-4488-8ebe-4805bf39854e\") " pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.800932 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e79e07d-bc20-4488-8ebe-4805bf39854e-host\") pod \"crc-debug-9hbl5\" (UID: \"2e79e07d-bc20-4488-8ebe-4805bf39854e\") " pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.903031 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxm25\" (UniqueName: \"kubernetes.io/projected/2e79e07d-bc20-4488-8ebe-4805bf39854e-kube-api-access-xxm25\") pod \"crc-debug-9hbl5\" (UID: \"2e79e07d-bc20-4488-8ebe-4805bf39854e\") " pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.903386 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e79e07d-bc20-4488-8ebe-4805bf39854e-host\") pod \"crc-debug-9hbl5\" (UID: \"2e79e07d-bc20-4488-8ebe-4805bf39854e\") " pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:32 crc kubenswrapper[5012]: I0219 06:52:32.903556 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e79e07d-bc20-4488-8ebe-4805bf39854e-host\") pod \"crc-debug-9hbl5\" (UID: \"2e79e07d-bc20-4488-8ebe-4805bf39854e\") " pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:32 crc 
kubenswrapper[5012]: I0219 06:52:32.929889 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxm25\" (UniqueName: \"kubernetes.io/projected/2e79e07d-bc20-4488-8ebe-4805bf39854e-kube-api-access-xxm25\") pod \"crc-debug-9hbl5\" (UID: \"2e79e07d-bc20-4488-8ebe-4805bf39854e\") " pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:33 crc kubenswrapper[5012]: I0219 06:52:33.053279 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:33 crc kubenswrapper[5012]: W0219 06:52:33.114572 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e79e07d_bc20_4488_8ebe_4805bf39854e.slice/crio-f25c286c5c0d69eed91d8be86c8ad827a202b08882a99cf863524f0a82aeb1c5 WatchSource:0}: Error finding container f25c286c5c0d69eed91d8be86c8ad827a202b08882a99cf863524f0a82aeb1c5: Status 404 returned error can't find the container with id f25c286c5c0d69eed91d8be86c8ad827a202b08882a99cf863524f0a82aeb1c5 Feb 19 06:52:33 crc kubenswrapper[5012]: I0219 06:52:33.169132 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/crc-debug-9hbl5" event={"ID":"2e79e07d-bc20-4488-8ebe-4805bf39854e","Type":"ContainerStarted","Data":"f25c286c5c0d69eed91d8be86c8ad827a202b08882a99cf863524f0a82aeb1c5"} Feb 19 06:52:34 crc kubenswrapper[5012]: I0219 06:52:34.189655 5012 generic.go:334] "Generic (PLEG): container finished" podID="2e79e07d-bc20-4488-8ebe-4805bf39854e" containerID="c71bf472cad9749e4de982b5b389c0bde1476a8315392a2d6a49a409e617e7e5" exitCode=0 Feb 19 06:52:34 crc kubenswrapper[5012]: I0219 06:52:34.189776 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/crc-debug-9hbl5" event={"ID":"2e79e07d-bc20-4488-8ebe-4805bf39854e","Type":"ContainerDied","Data":"c71bf472cad9749e4de982b5b389c0bde1476a8315392a2d6a49a409e617e7e5"} Feb 19 
06:52:34 crc kubenswrapper[5012]: I0219 06:52:34.254522 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hncx9/crc-debug-9hbl5"] Feb 19 06:52:34 crc kubenswrapper[5012]: I0219 06:52:34.269356 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hncx9/crc-debug-9hbl5"] Feb 19 06:52:35 crc kubenswrapper[5012]: I0219 06:52:35.344382 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:35 crc kubenswrapper[5012]: I0219 06:52:35.370065 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e79e07d-bc20-4488-8ebe-4805bf39854e-host" (OuterVolumeSpecName: "host") pod "2e79e07d-bc20-4488-8ebe-4805bf39854e" (UID: "2e79e07d-bc20-4488-8ebe-4805bf39854e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 06:52:35 crc kubenswrapper[5012]: I0219 06:52:35.370335 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e79e07d-bc20-4488-8ebe-4805bf39854e-host\") pod \"2e79e07d-bc20-4488-8ebe-4805bf39854e\" (UID: \"2e79e07d-bc20-4488-8ebe-4805bf39854e\") " Feb 19 06:52:35 crc kubenswrapper[5012]: I0219 06:52:35.370435 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxm25\" (UniqueName: \"kubernetes.io/projected/2e79e07d-bc20-4488-8ebe-4805bf39854e-kube-api-access-xxm25\") pod \"2e79e07d-bc20-4488-8ebe-4805bf39854e\" (UID: \"2e79e07d-bc20-4488-8ebe-4805bf39854e\") " Feb 19 06:52:35 crc kubenswrapper[5012]: I0219 06:52:35.371486 5012 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2e79e07d-bc20-4488-8ebe-4805bf39854e-host\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:35 crc kubenswrapper[5012]: I0219 06:52:35.380468 5012 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e79e07d-bc20-4488-8ebe-4805bf39854e-kube-api-access-xxm25" (OuterVolumeSpecName: "kube-api-access-xxm25") pod "2e79e07d-bc20-4488-8ebe-4805bf39854e" (UID: "2e79e07d-bc20-4488-8ebe-4805bf39854e"). InnerVolumeSpecName "kube-api-access-xxm25". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 06:52:35 crc kubenswrapper[5012]: I0219 06:52:35.473427 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxm25\" (UniqueName: \"kubernetes.io/projected/2e79e07d-bc20-4488-8ebe-4805bf39854e-kube-api-access-xxm25\") on node \"crc\" DevicePath \"\"" Feb 19 06:52:36 crc kubenswrapper[5012]: I0219 06:52:36.216821 5012 scope.go:117] "RemoveContainer" containerID="c71bf472cad9749e4de982b5b389c0bde1476a8315392a2d6a49a409e617e7e5" Feb 19 06:52:36 crc kubenswrapper[5012]: I0219 06:52:36.216881 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/crc-debug-9hbl5" Feb 19 06:52:36 crc kubenswrapper[5012]: I0219 06:52:36.714156 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e79e07d-bc20-4488-8ebe-4805bf39854e" path="/var/lib/kubelet/pods/2e79e07d-bc20-4488-8ebe-4805bf39854e/volumes" Feb 19 06:53:14 crc kubenswrapper[5012]: I0219 06:53:14.981108 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f669f7d76-2qg4s_875bbaf1-6c43-4474-9f7b-8202b2d5ee1c/barbican-api/0.log" Feb 19 06:53:15 crc kubenswrapper[5012]: I0219 06:53:15.103363 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f669f7d76-2qg4s_875bbaf1-6c43-4474-9f7b-8202b2d5ee1c/barbican-api-log/0.log" Feb 19 06:53:15 crc kubenswrapper[5012]: I0219 06:53:15.187371 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bb75756b-hd4xs_ee216ad2-2baf-4bba-a3fe-81acf9218af0/barbican-keystone-listener/0.log" Feb 19 06:53:15 crc 
kubenswrapper[5012]: I0219 06:53:15.345322 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bb75756b-hd4xs_ee216ad2-2baf-4bba-a3fe-81acf9218af0/barbican-keystone-listener-log/0.log" Feb 19 06:53:15 crc kubenswrapper[5012]: I0219 06:53:15.415601 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-779bfc8b79-ffj7v_9133f0f1-2d9e-462e-ba56-8a206f61bd03/barbican-worker/0.log" Feb 19 06:53:15 crc kubenswrapper[5012]: I0219 06:53:15.511767 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-779bfc8b79-ffj7v_9133f0f1-2d9e-462e-ba56-8a206f61bd03/barbican-worker-log/0.log" Feb 19 06:53:15 crc kubenswrapper[5012]: I0219 06:53:15.624113 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb_ebf47868-aec9-4f2e-8c08-499161f45b18/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:15 crc kubenswrapper[5012]: I0219 06:53:15.809100 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9647feae-5291-41e1-9bb4-631f661552b9/ceilometer-central-agent/0.log" Feb 19 06:53:15 crc kubenswrapper[5012]: I0219 06:53:15.922646 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9647feae-5291-41e1-9bb4-631f661552b9/proxy-httpd/0.log" Feb 19 06:53:15 crc kubenswrapper[5012]: I0219 06:53:15.923744 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9647feae-5291-41e1-9bb4-631f661552b9/ceilometer-notification-agent/0.log" Feb 19 06:53:15 crc kubenswrapper[5012]: I0219 06:53:15.928264 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9647feae-5291-41e1-9bb4-631f661552b9/sg-core/0.log" Feb 19 06:53:16 crc kubenswrapper[5012]: I0219 06:53:16.127610 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_4c548edc-6755-4310-9b8d-780a384ec6bd/cinder-api-log/0.log" Feb 19 06:53:16 crc kubenswrapper[5012]: I0219 06:53:16.332126 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_42946b07-c256-43a7-99d0-45f94c019663/cinder-scheduler/0.log" Feb 19 06:53:16 crc kubenswrapper[5012]: I0219 06:53:16.344340 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4c548edc-6755-4310-9b8d-780a384ec6bd/cinder-api/0.log" Feb 19 06:53:16 crc kubenswrapper[5012]: I0219 06:53:16.388781 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_42946b07-c256-43a7-99d0-45f94c019663/probe/0.log" Feb 19 06:53:16 crc kubenswrapper[5012]: I0219 06:53:16.556182 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-8sh74_a37d4335-7c06-4fa3-af51-6cfe6fb9a020/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:16 crc kubenswrapper[5012]: I0219 06:53:16.669886 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-bg5db_8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:16 crc kubenswrapper[5012]: I0219 06:53:16.800866 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-567c7bc999-cgf2v_c2eab861-ab13-4ab1-b57f-fecf9e95b9be/init/0.log" Feb 19 06:53:16 crc kubenswrapper[5012]: I0219 06:53:16.968680 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-567c7bc999-cgf2v_c2eab861-ab13-4ab1-b57f-fecf9e95b9be/init/0.log" Feb 19 06:53:17 crc kubenswrapper[5012]: I0219 06:53:17.080146 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-l597r_02358307-dba6-44fa-9799-2440b1496c55/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:17 crc kubenswrapper[5012]: I0219 06:53:17.130207 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-567c7bc999-cgf2v_c2eab861-ab13-4ab1-b57f-fecf9e95b9be/dnsmasq-dns/0.log" Feb 19 06:53:17 crc kubenswrapper[5012]: I0219 06:53:17.283788 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8cfddc12-1c4c-4faf-9edb-71fb80608785/glance-httpd/0.log" Feb 19 06:53:17 crc kubenswrapper[5012]: I0219 06:53:17.321519 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8cfddc12-1c4c-4faf-9edb-71fb80608785/glance-log/0.log" Feb 19 06:53:17 crc kubenswrapper[5012]: I0219 06:53:17.451409 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f55309b7-09e5-4496-8995-f03681386729/glance-httpd/0.log" Feb 19 06:53:17 crc kubenswrapper[5012]: I0219 06:53:17.480811 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f55309b7-09e5-4496-8995-f03681386729/glance-log/0.log" Feb 19 06:53:17 crc kubenswrapper[5012]: I0219 06:53:17.769098 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cdcb467fb-8tvnz_6c937bbe-f068-4e5b-81ad-9455104062da/horizon/0.log" Feb 19 06:53:17 crc kubenswrapper[5012]: I0219 06:53:17.810988 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5_d869003b-7b03-4a8b-9f9c-73ca0ec4f359/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:18 crc kubenswrapper[5012]: I0219 06:53:18.096948 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-kjhk7_0037b322-99bb-4ae2-aba4-85ddcd8243ae/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:18 crc kubenswrapper[5012]: I0219 06:53:18.248104 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cdcb467fb-8tvnz_6c937bbe-f068-4e5b-81ad-9455104062da/horizon-log/0.log" Feb 19 06:53:18 crc kubenswrapper[5012]: I0219 06:53:18.358907 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29524681-x9bcr_86c7e36d-88e3-432a-ad6f-74de626c5f30/keystone-cron/0.log" Feb 19 06:53:18 crc kubenswrapper[5012]: I0219 06:53:18.592270 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_cc79bf66-4a34-43fe-ad03-4e6ce60d2c44/kube-state-metrics/0.log" Feb 19 06:53:18 crc kubenswrapper[5012]: I0219 06:53:18.680777 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7b574779c9-x2bsv_0e0a6a9f-d11f-4084-9742-7780b20fae75/keystone-api/0.log" Feb 19 06:53:18 crc kubenswrapper[5012]: I0219 06:53:18.759015 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-2n79s_fcace677-35b0-499f-998c-99168fbfa0af/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:19 crc kubenswrapper[5012]: I0219 06:53:19.186504 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2_534720dc-6ff8-4fdc-9337-6fe77ad1eaa8/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:19 crc kubenswrapper[5012]: I0219 06:53:19.277119 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5ff88b6c7c-5bg66_eb805277-3dfc-4810-9845-3ba928d262c2/neutron-httpd/0.log" Feb 19 06:53:19 crc kubenswrapper[5012]: I0219 06:53:19.287868 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-5ff88b6c7c-5bg66_eb805277-3dfc-4810-9845-3ba928d262c2/neutron-api/0.log" Feb 19 06:53:19 crc kubenswrapper[5012]: I0219 06:53:19.434242 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_3c628866-f96d-4e7b-8846-7073c98dd389/setup-container/0.log" Feb 19 06:53:19 crc kubenswrapper[5012]: I0219 06:53:19.637831 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_3c628866-f96d-4e7b-8846-7073c98dd389/rabbitmq/0.log" Feb 19 06:53:19 crc kubenswrapper[5012]: I0219 06:53:19.665593 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_3c628866-f96d-4e7b-8846-7073c98dd389/setup-container/0.log" Feb 19 06:53:20 crc kubenswrapper[5012]: I0219 06:53:20.658143 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_6852caab-c1b6-40cd-b5df-88d22f6016bd/nova-cell0-conductor-conductor/0.log" Feb 19 06:53:20 crc kubenswrapper[5012]: I0219 06:53:20.812434 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_aceef718-9d1c-441d-bf1b-92c0a6831def/nova-cell1-conductor-conductor/0.log" Feb 19 06:53:21 crc kubenswrapper[5012]: I0219 06:53:21.023188 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c1a529b0-65f7-4680-a4fd-4dacebc1ab83/nova-api-log/0.log" Feb 19 06:53:21 crc kubenswrapper[5012]: I0219 06:53:21.117040 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_661e04e4-4ba2-4ea0-9ba6-3af2949e7e21/nova-cell1-novncproxy-novncproxy/0.log" Feb 19 06:53:21 crc kubenswrapper[5012]: I0219 06:53:21.303967 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-p67w4_a6116441-2985-4723-9889-6c3422159243/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:21 crc 
kubenswrapper[5012]: I0219 06:53:21.340803 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c1a529b0-65f7-4680-a4fd-4dacebc1ab83/nova-api-api/0.log" Feb 19 06:53:21 crc kubenswrapper[5012]: I0219 06:53:21.414597 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_396b18f9-9859-4b42-aca1-c29c3724c86c/nova-metadata-log/0.log" Feb 19 06:53:21 crc kubenswrapper[5012]: I0219 06:53:21.883210 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0/nova-scheduler-scheduler/0.log" Feb 19 06:53:22 crc kubenswrapper[5012]: I0219 06:53:22.182685 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_04466d10-2177-4361-bd86-333c046b9e52/mysql-bootstrap/0.log" Feb 19 06:53:22 crc kubenswrapper[5012]: I0219 06:53:22.421649 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_04466d10-2177-4361-bd86-333c046b9e52/galera/0.log" Feb 19 06:53:22 crc kubenswrapper[5012]: I0219 06:53:22.424100 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_04466d10-2177-4361-bd86-333c046b9e52/mysql-bootstrap/0.log" Feb 19 06:53:22 crc kubenswrapper[5012]: I0219 06:53:22.648669 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1fd0c672-e258-4feb-8bbd-26135f92f7fb/mysql-bootstrap/0.log" Feb 19 06:53:22 crc kubenswrapper[5012]: I0219 06:53:22.840000 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1fd0c672-e258-4feb-8bbd-26135f92f7fb/mysql-bootstrap/0.log" Feb 19 06:53:22 crc kubenswrapper[5012]: I0219 06:53:22.919774 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1fd0c672-e258-4feb-8bbd-26135f92f7fb/galera/0.log" Feb 19 06:53:23 crc kubenswrapper[5012]: I0219 06:53:23.002005 5012 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstackclient_75258dbe-c223-4e55-92a6-8e588745294a/openstackclient/0.log" Feb 19 06:53:23 crc kubenswrapper[5012]: I0219 06:53:23.152501 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-cr94m_e2c9ac17-43ef-4ccb-83b1-e20ee03289de/ovn-controller/0.log" Feb 19 06:53:23 crc kubenswrapper[5012]: I0219 06:53:23.375588 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mz9j9_c711491e-0b8b-4737-88c9-bc5e37051ac1/openstack-network-exporter/0.log" Feb 19 06:53:23 crc kubenswrapper[5012]: I0219 06:53:23.499942 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_396b18f9-9859-4b42-aca1-c29c3724c86c/nova-metadata-metadata/0.log" Feb 19 06:53:23 crc kubenswrapper[5012]: I0219 06:53:23.527482 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7qdpg_16fbaba1-bd32-4121-8743-99422db74180/ovsdb-server-init/0.log" Feb 19 06:53:23 crc kubenswrapper[5012]: I0219 06:53:23.748583 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7qdpg_16fbaba1-bd32-4121-8743-99422db74180/ovsdb-server-init/0.log" Feb 19 06:53:23 crc kubenswrapper[5012]: I0219 06:53:23.795624 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7qdpg_16fbaba1-bd32-4121-8743-99422db74180/ovsdb-server/0.log" Feb 19 06:53:23 crc kubenswrapper[5012]: I0219 06:53:23.992767 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-gxxmx_7335769e-5b13-4d1b-8aa7-e7f192ee9e2b/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:24 crc kubenswrapper[5012]: I0219 06:53:24.050524 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e3e8f67d-0748-4bff-b7c5-8432c7e4ab64/openstack-network-exporter/0.log" Feb 19 06:53:24 crc 
kubenswrapper[5012]: I0219 06:53:24.150441 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7qdpg_16fbaba1-bd32-4121-8743-99422db74180/ovs-vswitchd/0.log" Feb 19 06:53:24 crc kubenswrapper[5012]: I0219 06:53:24.189469 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e3e8f67d-0748-4bff-b7c5-8432c7e4ab64/ovn-northd/0.log" Feb 19 06:53:24 crc kubenswrapper[5012]: I0219 06:53:24.336545 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5a9e6735-4159-4248-a8f5-6714d386901a/openstack-network-exporter/0.log" Feb 19 06:53:24 crc kubenswrapper[5012]: I0219 06:53:24.377909 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5a9e6735-4159-4248-a8f5-6714d386901a/ovsdbserver-nb/0.log" Feb 19 06:53:24 crc kubenswrapper[5012]: I0219 06:53:24.490041 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_00790bd0-5fbb-4927-8361-085c9691c171/openstack-network-exporter/0.log" Feb 19 06:53:24 crc kubenswrapper[5012]: I0219 06:53:24.558059 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_00790bd0-5fbb-4927-8361-085c9691c171/ovsdbserver-sb/0.log" Feb 19 06:53:24 crc kubenswrapper[5012]: I0219 06:53:24.852898 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f94997dd8-cvnfv_b0ce1e0a-4e51-408c-b3f8-500cf6476b96/placement-api/0.log" Feb 19 06:53:24 crc kubenswrapper[5012]: I0219 06:53:24.885651 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/init-config-reloader/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 06:53:25.027954 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f94997dd8-cvnfv_b0ce1e0a-4e51-408c-b3f8-500cf6476b96/placement-log/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: 
I0219 06:53:25.075719 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/config-reloader/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 06:53:25.081742 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/prometheus/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 06:53:25.098880 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/init-config-reloader/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 06:53:25.296662 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/thanos-sidecar/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 06:53:25.306397 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4984f0c1-33e8-4506-b6d7-e554dca0e4c8/setup-container/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 06:53:25.530397 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4984f0c1-33e8-4506-b6d7-e554dca0e4c8/setup-container/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 06:53:25.629940 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c3230f97-dbe4-42a2-b009-a8370c601e78/setup-container/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 06:53:25.636655 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4984f0c1-33e8-4506-b6d7-e554dca0e4c8/rabbitmq/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 06:53:25.889717 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c3230f97-dbe4-42a2-b009-a8370c601e78/setup-container/0.log" Feb 19 06:53:25 crc kubenswrapper[5012]: I0219 
06:53:25.901842 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c3230f97-dbe4-42a2-b009-a8370c601e78/rabbitmq/0.log" Feb 19 06:53:26 crc kubenswrapper[5012]: I0219 06:53:26.011385 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs_464de984-0dd6-4c4d-aed3-afbf84e0cdcf/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:26 crc kubenswrapper[5012]: I0219 06:53:26.231006 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-skvzd_07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:26 crc kubenswrapper[5012]: I0219 06:53:26.322494 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-pl267_61bd41ab-cfea-4df2-9be0-8321c6c11ebd/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:26 crc kubenswrapper[5012]: I0219 06:53:26.500031 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-7xnxl_86b984ed-bd52-4348-9415-dccff4a0e1a4/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:26 crc kubenswrapper[5012]: I0219 06:53:26.586874 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9rlns_f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc/ssh-known-hosts-edpm-deployment/0.log" Feb 19 06:53:26 crc kubenswrapper[5012]: I0219 06:53:26.852554 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-59bfbf7475-v98h9_4c9aa274-240d-4d50-b38a-754dd493f351/proxy-server/0.log" Feb 19 06:53:26 crc kubenswrapper[5012]: I0219 06:53:26.987931 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-59bfbf7475-v98h9_4c9aa274-240d-4d50-b38a-754dd493f351/proxy-httpd/0.log" Feb 19 06:53:26 crc 
kubenswrapper[5012]: I0219 06:53:26.993997 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-5vxhd_d05da3bc-6c22-4956-9fab-331eed79d175/swift-ring-rebalance/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.287623 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/account-replicator/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.423163 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/account-auditor/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.441140 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/account-reaper/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.609618 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/account-server/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.609855 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/container-auditor/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.671205 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/container-replicator/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.706596 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/container-server/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.863468 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-auditor/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.869165 5012 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/container-updater/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.925049 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-expirer/0.log" Feb 19 06:53:27 crc kubenswrapper[5012]: I0219 06:53:27.977721 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-replicator/0.log" Feb 19 06:53:28 crc kubenswrapper[5012]: I0219 06:53:28.071430 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-server/0.log" Feb 19 06:53:28 crc kubenswrapper[5012]: I0219 06:53:28.074700 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-updater/0.log" Feb 19 06:53:28 crc kubenswrapper[5012]: I0219 06:53:28.162895 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/rsync/0.log" Feb 19 06:53:28 crc kubenswrapper[5012]: I0219 06:53:28.219446 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/swift-recon-cron/0.log" Feb 19 06:53:28 crc kubenswrapper[5012]: I0219 06:53:28.407858 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx_73fe066f-3ee6-4ffc-aeb4-874c14fb0b84/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:28 crc kubenswrapper[5012]: I0219 06:53:28.432170 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_54eccb09-b3ec-45bc-8065-4c5eb9516257/tempest-tests-tempest-tests-runner/0.log" Feb 19 06:53:28 crc kubenswrapper[5012]: I0219 06:53:28.628314 5012 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_78c125a8-bf69-4524-9b70-be9fe9f313e7/test-operator-logs-container/0.log" Feb 19 06:53:28 crc kubenswrapper[5012]: I0219 06:53:28.687956 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6_cdccd552-e703-4d8d-86b4-ff481671527f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 06:53:29 crc kubenswrapper[5012]: I0219 06:53:29.421742 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_d4778529-f7d0-482b-bd67-003aaa7ca0ae/watcher-applier/0.log" Feb 19 06:53:30 crc kubenswrapper[5012]: I0219 06:53:30.124101 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7d74d5de-7e1d-47cc-8aaa-cb303332a03a/watcher-api-log/0.log" Feb 19 06:53:32 crc kubenswrapper[5012]: I0219 06:53:32.602141 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_f87036fc-fa94-4038-8b65-bb85d8ff6f10/watcher-decision-engine/0.log" Feb 19 06:53:33 crc kubenswrapper[5012]: I0219 06:53:33.825595 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7d74d5de-7e1d-47cc-8aaa-cb303332a03a/watcher-api/0.log" Feb 19 06:53:42 crc kubenswrapper[5012]: I0219 06:53:42.887204 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_38a4a51f-c380-48fc-8f0e-cdd1ea09fa53/memcached/0.log" Feb 19 06:54:03 crc kubenswrapper[5012]: I0219 06:54:03.428471 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/util/0.log" Feb 19 06:54:03 crc kubenswrapper[5012]: I0219 06:54:03.560539 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/util/0.log" Feb 19 06:54:03 crc kubenswrapper[5012]: I0219 06:54:03.590518 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/pull/0.log" Feb 19 06:54:03 crc kubenswrapper[5012]: I0219 06:54:03.594704 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/pull/0.log" Feb 19 06:54:03 crc kubenswrapper[5012]: I0219 06:54:03.767887 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/pull/0.log" Feb 19 06:54:03 crc kubenswrapper[5012]: I0219 06:54:03.805660 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/extract/0.log" Feb 19 06:54:03 crc kubenswrapper[5012]: I0219 06:54:03.818998 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/util/0.log" Feb 19 06:54:04 crc kubenswrapper[5012]: I0219 06:54:04.310101 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-kt4nw_11d49fcd-6e31-47e5-84a1-c6ae972e13cb/manager/0.log" Feb 19 06:54:04 crc kubenswrapper[5012]: I0219 06:54:04.704021 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-qzq7x_8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be/manager/0.log" Feb 19 06:54:04 crc 
kubenswrapper[5012]: I0219 06:54:04.770914 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-5szxp_bfca307c-9b00-4c12-bdd6-a394b7cc7cfd/manager/0.log" Feb 19 06:54:05 crc kubenswrapper[5012]: I0219 06:54:05.024250 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-csct6_4f281b5b-b656-4d4a-b628-d4bfe4fc94f9/manager/0.log" Feb 19 06:54:05 crc kubenswrapper[5012]: I0219 06:54:05.560812 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-dgldv_8629b5e4-e6a8-4c47-b76b-f58a26b42912/manager/0.log" Feb 19 06:54:05 crc kubenswrapper[5012]: I0219 06:54:05.913216 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-cp8kx_996bfd61-486b-432d-9e09-d3a90ff9124c/manager/0.log" Feb 19 06:54:06 crc kubenswrapper[5012]: I0219 06:54:06.073351 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-9zkvx_dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43/manager/0.log" Feb 19 06:54:06 crc kubenswrapper[5012]: I0219 06:54:06.294987 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-ldrx5_e9e07b56-2724-4046-8a60-81b751fb0588/manager/0.log" Feb 19 06:54:06 crc kubenswrapper[5012]: I0219 06:54:06.509723 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-rpbt8_1e872b11-03d6-4d3f-8e06-e10e1e73d917/manager/0.log" Feb 19 06:54:06 crc kubenswrapper[5012]: I0219 06:54:06.689642 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-556xv_8af03a54-ad7a-4684-b5a6-ba83f410e6ed/manager/0.log" Feb 19 
06:54:06 crc kubenswrapper[5012]: I0219 06:54:06.857092 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-27hfc_b123191d-e55b-4ddc-90ea-abcb34c97be2/manager/0.log" Feb 19 06:54:07 crc kubenswrapper[5012]: I0219 06:54:07.027043 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-l65c5_457202a7-ae9f-4d06-8690-d220e532b305/manager/0.log" Feb 19 06:54:07 crc kubenswrapper[5012]: I0219 06:54:07.243163 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4_d6eb3922-90e6-4bb1-8caa-aac6b69c76b0/manager/0.log" Feb 19 06:54:07 crc kubenswrapper[5012]: I0219 06:54:07.506623 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6679bf9b57-q57bk_76b34ac4-96f1-4bbc-9969-eb3e1cfc2159/operator/0.log" Feb 19 06:54:07 crc kubenswrapper[5012]: I0219 06:54:07.836279 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cl447_797c14cf-1b4d-4b4e-9dc5-4843e2e77cef/registry-server/0.log" Feb 19 06:54:08 crc kubenswrapper[5012]: I0219 06:54:08.146856 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-25qtj_10e6fa53-581b-4965-8a38-c70a5c61c6d7/manager/0.log" Feb 19 06:54:08 crc kubenswrapper[5012]: I0219 06:54:08.368763 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-nlqtw_08a4f79c-e42e-4609-b104-01b9a05ac95a/manager/0.log" Feb 19 06:54:08 crc kubenswrapper[5012]: I0219 06:54:08.724341 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-mqc2w_4a3cde05-282a-4c65-9570-74d04c71a034/operator/0.log" 
Feb 19 06:54:08 crc kubenswrapper[5012]: I0219 06:54:08.946675 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-6hfg4_c55ed223-371b-409a-bcb6-8ca6d2a3c908/manager/0.log" Feb 19 06:54:09 crc kubenswrapper[5012]: I0219 06:54:09.382016 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-pcpk8_73e25e30-860d-4faf-b1f3-bc284f7189d1/manager/0.log" Feb 19 06:54:09 crc kubenswrapper[5012]: I0219 06:54:09.488274 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-qjpw6_49d66f3b-e451-4b73-bc6a-4b854a71a4d6/manager/0.log" Feb 19 06:54:09 crc kubenswrapper[5012]: I0219 06:54:09.774535 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-69ff7bc449-tj54n_d1f124a8-4132-458d-a5a5-1839d31e7772/manager/0.log" Feb 19 06:54:09 crc kubenswrapper[5012]: I0219 06:54:09.779490 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-z5r47_739941d0-4bff-4dae-8f01-636386a37dd0/manager/0.log" Feb 19 06:54:10 crc kubenswrapper[5012]: I0219 06:54:10.072038 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-pqrs7_ef60eda4-7ead-499b-b70f-07a34574096f/manager/0.log" Feb 19 06:54:16 crc kubenswrapper[5012]: I0219 06:54:16.190494 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-xzk2n_0cc1b41b-fbf6-4d0c-b721-dcad09c03feb/manager/0.log" Feb 19 06:54:32 crc kubenswrapper[5012]: I0219 06:54:32.989924 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mbxqf_9102ddf1-e140-48e7-9ecd-14a4c007f5d5/control-plane-machine-set-operator/0.log" Feb 19 06:54:33 crc kubenswrapper[5012]: I0219 06:54:33.163807 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6qvzq_5c537eae-5a27-4a4d-ba9e-0fd7efe72f37/kube-rbac-proxy/0.log" Feb 19 06:54:33 crc kubenswrapper[5012]: I0219 06:54:33.233542 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6qvzq_5c537eae-5a27-4a4d-ba9e-0fd7efe72f37/machine-api-operator/0.log" Feb 19 06:54:44 crc kubenswrapper[5012]: I0219 06:54:44.430230 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:54:44 crc kubenswrapper[5012]: I0219 06:54:44.430871 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:54:48 crc kubenswrapper[5012]: I0219 06:54:48.940791 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-sq68l_3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02/cert-manager-controller/0.log" Feb 19 06:54:49 crc kubenswrapper[5012]: I0219 06:54:49.086775 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-w66zf_4b5870bd-8fb3-4eef-a893-f31ce8bb1506/cert-manager-cainjector/0.log" Feb 19 06:54:49 crc kubenswrapper[5012]: I0219 06:54:49.132239 5012 log.go:25] "Finished 
parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-drndq_53138562-0907-4b72-b228-21ef0c561f57/cert-manager-webhook/0.log" Feb 19 06:55:04 crc kubenswrapper[5012]: I0219 06:55:04.372143 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-zvl62_0aad4d6c-fc60-4843-b21b-d4ad6d552d5f/nmstate-console-plugin/0.log" Feb 19 06:55:05 crc kubenswrapper[5012]: I0219 06:55:05.130461 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tdz8p_4b5e9e17-84bc-4d05-87f9-328826ea39df/nmstate-handler/0.log" Feb 19 06:55:05 crc kubenswrapper[5012]: I0219 06:55:05.188758 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-hn274_91d45b3f-23b3-4342-8168-667f665ffe82/kube-rbac-proxy/0.log" Feb 19 06:55:05 crc kubenswrapper[5012]: I0219 06:55:05.239036 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-hn274_91d45b3f-23b3-4342-8168-667f665ffe82/nmstate-metrics/0.log" Feb 19 06:55:05 crc kubenswrapper[5012]: I0219 06:55:05.340019 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-2smgj_d6ac1260-4ff8-4025-af6e-35711452ef6f/nmstate-operator/0.log" Feb 19 06:55:05 crc kubenswrapper[5012]: I0219 06:55:05.456814 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-mqtfh_50749fb3-e43e-4874-a0ea-8dabae225f85/nmstate-webhook/0.log" Feb 19 06:55:14 crc kubenswrapper[5012]: I0219 06:55:14.430994 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 06:55:14 crc kubenswrapper[5012]: I0219 
06:55:14.431660 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 06:55:21 crc kubenswrapper[5012]: I0219 06:55:21.130704 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-9t66t_9f3d925a-f08d-4e92-baf3-805f27c9ae35/prometheus-operator/0.log" Feb 19 06:55:21 crc kubenswrapper[5012]: I0219 06:55:21.302870 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-685558f558-cddcp_9364b7f3-e3e3-4432-a4e7-4b80c9a50225/prometheus-operator-admission-webhook/0.log" Feb 19 06:55:21 crc kubenswrapper[5012]: I0219 06:55:21.339388 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-685558f558-rlcjg_3c60bb85-2242-4d9f-95f9-27b2e747727d/prometheus-operator-admission-webhook/0.log" Feb 19 06:55:21 crc kubenswrapper[5012]: I0219 06:55:21.491536 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-vw7xl_63ee166b-5027-4928-9196-9488685f87d5/operator/0.log" Feb 19 06:55:21 crc kubenswrapper[5012]: I0219 06:55:21.518824 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-5grbr_86bcbf15-9553-41af-974c-3418e588e575/perses-operator/0.log" Feb 19 06:55:39 crc kubenswrapper[5012]: I0219 06:55:39.847959 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-c4jbq_fe949ecf-1cb7-47c7-b196-d4851f142c5f/kube-rbac-proxy/0.log" Feb 19 06:55:39 crc kubenswrapper[5012]: I0219 06:55:39.910743 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-69bbfbf88f-c4jbq_fe949ecf-1cb7-47c7-b196-d4851f142c5f/controller/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.060078 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-frr-files/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.261288 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-frr-files/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.261292 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-metrics/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.277434 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-reloader/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.291054 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-reloader/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.454753 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-reloader/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.461110 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-frr-files/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.468966 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-metrics/0.log" Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.478564 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-metrics/0.log"
Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.692437 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-frr-files/0.log"
Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.715614 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/controller/0.log"
Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.736921 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-reloader/0.log"
Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.752370 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-metrics/0.log"
Feb 19 06:55:40 crc kubenswrapper[5012]: I0219 06:55:40.972568 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/frr-metrics/0.log"
Feb 19 06:55:41 crc kubenswrapper[5012]: I0219 06:55:41.043225 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/kube-rbac-proxy/0.log"
Feb 19 06:55:41 crc kubenswrapper[5012]: I0219 06:55:41.043419 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/kube-rbac-proxy-frr/0.log"
Feb 19 06:55:41 crc kubenswrapper[5012]: I0219 06:55:41.195227 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/reloader/0.log"
Feb 19 06:55:41 crc kubenswrapper[5012]: I0219 06:55:41.291666 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-hdb84_431a9bf4-479e-4255-9664-554c80fa4376/frr-k8s-webhook-server/0.log"
Feb 19 06:55:41 crc kubenswrapper[5012]: I0219 06:55:41.502189 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-558c5c4774-9r4gj_05b78fff-bf4d-4cd6-aba9-b74303a5dd50/manager/0.log"
Feb 19 06:55:41 crc kubenswrapper[5012]: I0219 06:55:41.672802 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-699bc447bd-zqv74_ec7fdada-6f6e-4d8b-b2e1-c944050c714c/webhook-server/0.log"
Feb 19 06:55:41 crc kubenswrapper[5012]: I0219 06:55:41.957724 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-87ct4_82cb6684-3937-45f8-9f18-56940e88f480/kube-rbac-proxy/0.log"
Feb 19 06:55:42 crc kubenswrapper[5012]: I0219 06:55:42.568223 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-87ct4_82cb6684-3937-45f8-9f18-56940e88f480/speaker/0.log"
Feb 19 06:55:42 crc kubenswrapper[5012]: I0219 06:55:42.578826 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/frr/0.log"
Feb 19 06:55:44 crc kubenswrapper[5012]: I0219 06:55:44.430904 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 06:55:44 crc kubenswrapper[5012]: I0219 06:55:44.431766 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 06:55:44 crc kubenswrapper[5012]: I0219 06:55:44.431857 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44"
Feb 19 06:55:44 crc kubenswrapper[5012]: I0219 06:55:44.433058 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"48054e4bb9edb7bcb5d43f31e62ca380e81f64c675e1f2cd4a65b9f2238ff941"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 06:55:44 crc kubenswrapper[5012]: I0219 06:55:44.433173 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://48054e4bb9edb7bcb5d43f31e62ca380e81f64c675e1f2cd4a65b9f2238ff941" gracePeriod=600
Feb 19 06:55:45 crc kubenswrapper[5012]: I0219 06:55:45.151681 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="48054e4bb9edb7bcb5d43f31e62ca380e81f64c675e1f2cd4a65b9f2238ff941" exitCode=0
Feb 19 06:55:45 crc kubenswrapper[5012]: I0219 06:55:45.151726 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"48054e4bb9edb7bcb5d43f31e62ca380e81f64c675e1f2cd4a65b9f2238ff941"}
Feb 19 06:55:45 crc kubenswrapper[5012]: I0219 06:55:45.152575 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35"}
Feb 19 06:55:45 crc kubenswrapper[5012]: I0219 06:55:45.152600 5012 scope.go:117] "RemoveContainer" containerID="b7488da2bf9b9623eca7cf375195b217eca919bc301c080c307fb7a0e0591dc3"
Feb 19 06:55:59 crc kubenswrapper[5012]: I0219 06:55:59.515702 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/util/0.log"
Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.108926 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/pull/0.log"
Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.117725 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/pull/0.log"
Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.133406 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/util/0.log"
Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.313117 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/util/0.log"
Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.333565 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/pull/0.log"
Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.338094 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/extract/0.log"
Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.496130 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/util/0.log"
Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.653437 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/util/0.log"
Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.666147 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/pull/0.log"
Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.681466 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/pull/0.log"
Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.877004 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/pull/0.log"
Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.892428 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/extract/0.log"
Feb 19 06:56:00 crc kubenswrapper[5012]: I0219 06:56:00.927546 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/util/0.log"
Feb 19 06:56:01 crc kubenswrapper[5012]: I0219 06:56:01.745456 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-utilities/0.log"
Feb 19 06:56:01 crc kubenswrapper[5012]: I0219 06:56:01.919551 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-content/0.log"
Feb 19 06:56:01 crc kubenswrapper[5012]: I0219 06:56:01.921757 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-utilities/0.log"
Feb 19 06:56:01 crc kubenswrapper[5012]: I0219 06:56:01.983004 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-content/0.log"
Feb 19 06:56:02 crc kubenswrapper[5012]: I0219 06:56:02.145578 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-content/0.log"
Feb 19 06:56:02 crc kubenswrapper[5012]: I0219 06:56:02.159002 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-utilities/0.log"
Feb 19 06:56:02 crc kubenswrapper[5012]: I0219 06:56:02.399015 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-utilities/0.log"
Feb 19 06:56:02 crc kubenswrapper[5012]: I0219 06:56:02.682938 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-utilities/0.log"
Feb 19 06:56:02 crc kubenswrapper[5012]: I0219 06:56:02.689791 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-content/0.log"
Feb 19 06:56:02 crc kubenswrapper[5012]: I0219 06:56:02.708425 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-content/0.log"
Feb 19 06:56:02 crc kubenswrapper[5012]: I0219 06:56:02.862946 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/registry-server/0.log"
Feb 19 06:56:02 crc kubenswrapper[5012]: I0219 06:56:02.876516 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-utilities/0.log"
Feb 19 06:56:02 crc kubenswrapper[5012]: I0219 06:56:02.918591 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-content/0.log"
Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.102040 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/util/0.log"
Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.238828 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/registry-server/0.log"
Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.273106 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/util/0.log"
Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.297629 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/pull/0.log"
Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.312726 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/pull/0.log"
Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.460120 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/pull/0.log"
Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.482146 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/util/0.log"
Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.491343 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/extract/0.log"
Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.525209 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jqjls_800f8349-6ef3-44ae-90a0-56c89ca82479/marketplace-operator/0.log"
Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.668134 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-utilities/0.log"
Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.838474 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-utilities/0.log"
Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.857500 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-content/0.log"
Feb 19 06:56:03 crc kubenswrapper[5012]: I0219 06:56:03.871755 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-content/0.log"
Feb 19 06:56:04 crc kubenswrapper[5012]: I0219 06:56:04.009590 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-utilities/0.log"
Feb 19 06:56:04 crc kubenswrapper[5012]: I0219 06:56:04.032661 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-content/0.log"
Feb 19 06:56:04 crc kubenswrapper[5012]: I0219 06:56:04.119066 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-utilities/0.log"
Feb 19 06:56:04 crc kubenswrapper[5012]: I0219 06:56:04.217268 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/registry-server/0.log"
Feb 19 06:56:04 crc kubenswrapper[5012]: I0219 06:56:04.482078 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-content/0.log"
Feb 19 06:56:04 crc kubenswrapper[5012]: I0219 06:56:04.482230 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-content/0.log"
Feb 19 06:56:04 crc kubenswrapper[5012]: I0219 06:56:04.505760 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-utilities/0.log"
Feb 19 06:56:04 crc kubenswrapper[5012]: I0219 06:56:04.684688 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-content/0.log"
Feb 19 06:56:04 crc kubenswrapper[5012]: I0219 06:56:04.695327 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-utilities/0.log"
Feb 19 06:56:05 crc kubenswrapper[5012]: I0219 06:56:05.310074 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/registry-server/0.log"
Feb 19 06:56:21 crc kubenswrapper[5012]: I0219 06:56:21.158430 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-9t66t_9f3d925a-f08d-4e92-baf3-805f27c9ae35/prometheus-operator/0.log"
Feb 19 06:56:21 crc kubenswrapper[5012]: I0219 06:56:21.189825 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-685558f558-cddcp_9364b7f3-e3e3-4432-a4e7-4b80c9a50225/prometheus-operator-admission-webhook/0.log"
Feb 19 06:56:21 crc kubenswrapper[5012]: I0219 06:56:21.192732 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-685558f558-rlcjg_3c60bb85-2242-4d9f-95f9-27b2e747727d/prometheus-operator-admission-webhook/0.log"
Feb 19 06:56:21 crc kubenswrapper[5012]: I0219 06:56:21.218592 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-vw7xl_63ee166b-5027-4928-9196-9488685f87d5/operator/0.log"
Feb 19 06:56:21 crc kubenswrapper[5012]: I0219 06:56:21.321669 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-5grbr_86bcbf15-9553-41af-974c-3418e588e575/perses-operator/0.log"
Feb 19 06:57:44 crc kubenswrapper[5012]: I0219 06:57:44.431194 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 06:57:44 crc kubenswrapper[5012]: I0219 06:57:44.432062 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 06:58:14 crc kubenswrapper[5012]: I0219 06:58:14.430543 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 06:58:14 crc kubenswrapper[5012]: I0219 06:58:14.431279 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 06:58:19 crc kubenswrapper[5012]: I0219 06:58:19.947187 5012 generic.go:334] "Generic (PLEG): container finished" podID="5afd9390-aa19-4b48-b659-089e59ea82e5" containerID="1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828" exitCode=0
Feb 19 06:58:19 crc kubenswrapper[5012]: I0219 06:58:19.947365 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hncx9/must-gather-gs9fs" event={"ID":"5afd9390-aa19-4b48-b659-089e59ea82e5","Type":"ContainerDied","Data":"1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828"}
Feb 19 06:58:19 crc kubenswrapper[5012]: I0219 06:58:19.948442 5012 scope.go:117] "RemoveContainer" containerID="1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828"
Feb 19 06:58:20 crc kubenswrapper[5012]: I0219 06:58:20.473433 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hncx9_must-gather-gs9fs_5afd9390-aa19-4b48-b659-089e59ea82e5/gather/0.log"
Feb 19 06:58:28 crc kubenswrapper[5012]: I0219 06:58:28.495382 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hncx9/must-gather-gs9fs"]
Feb 19 06:58:28 crc kubenswrapper[5012]: I0219 06:58:28.496625 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-hncx9/must-gather-gs9fs" podUID="5afd9390-aa19-4b48-b659-089e59ea82e5" containerName="copy" containerID="cri-o://7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48" gracePeriod=2
Feb 19 06:58:28 crc kubenswrapper[5012]: I0219 06:58:28.511047 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hncx9/must-gather-gs9fs"]
Feb 19 06:58:28 crc kubenswrapper[5012]: I0219 06:58:28.906459 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hncx9_must-gather-gs9fs_5afd9390-aa19-4b48-b659-089e59ea82e5/copy/0.log"
Feb 19 06:58:28 crc kubenswrapper[5012]: I0219 06:58:28.907472 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/must-gather-gs9fs"
Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.037703 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzz4l\" (UniqueName: \"kubernetes.io/projected/5afd9390-aa19-4b48-b659-089e59ea82e5-kube-api-access-vzz4l\") pod \"5afd9390-aa19-4b48-b659-089e59ea82e5\" (UID: \"5afd9390-aa19-4b48-b659-089e59ea82e5\") "
Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.037766 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5afd9390-aa19-4b48-b659-089e59ea82e5-must-gather-output\") pod \"5afd9390-aa19-4b48-b659-089e59ea82e5\" (UID: \"5afd9390-aa19-4b48-b659-089e59ea82e5\") "
Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.044274 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5afd9390-aa19-4b48-b659-089e59ea82e5-kube-api-access-vzz4l" (OuterVolumeSpecName: "kube-api-access-vzz4l") pod "5afd9390-aa19-4b48-b659-089e59ea82e5" (UID: "5afd9390-aa19-4b48-b659-089e59ea82e5"). InnerVolumeSpecName "kube-api-access-vzz4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.049004 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hncx9_must-gather-gs9fs_5afd9390-aa19-4b48-b659-089e59ea82e5/copy/0.log"
Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.049338 5012 generic.go:334] "Generic (PLEG): container finished" podID="5afd9390-aa19-4b48-b659-089e59ea82e5" containerID="7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48" exitCode=143
Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.049383 5012 scope.go:117] "RemoveContainer" containerID="7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48"
Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.049487 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hncx9/must-gather-gs9fs"
Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.109873 5012 scope.go:117] "RemoveContainer" containerID="1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828"
Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.142036 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzz4l\" (UniqueName: \"kubernetes.io/projected/5afd9390-aa19-4b48-b659-089e59ea82e5-kube-api-access-vzz4l\") on node \"crc\" DevicePath \"\""
Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.200825 5012 scope.go:117] "RemoveContainer" containerID="7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48"
Feb 19 06:58:29 crc kubenswrapper[5012]: E0219 06:58:29.201282 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48\": container with ID starting with 7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48 not found: ID does not exist" containerID="7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48"
Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.201337 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48"} err="failed to get container status \"7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48\": rpc error: code = NotFound desc = could not find container \"7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48\": container with ID starting with 7fbf3aca94d6983be6771c3709f2ffd360f9816cfd60f22bf27ff44fda7b1c48 not found: ID does not exist"
Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.201364 5012 scope.go:117] "RemoveContainer" containerID="1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828"
Feb 19 06:58:29 crc kubenswrapper[5012]: E0219 06:58:29.201612 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828\": container with ID starting with 1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828 not found: ID does not exist" containerID="1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828"
Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.201634 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828"} err="failed to get container status \"1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828\": rpc error: code = NotFound desc = could not find container \"1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828\": container with ID starting with 1aca46ae29c1dd4e8b9aef648da698d41f997bbcb28aea4b218dee86e7f9f828 not found: ID does not exist"
Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.237063 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5afd9390-aa19-4b48-b659-089e59ea82e5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "5afd9390-aa19-4b48-b659-089e59ea82e5" (UID: "5afd9390-aa19-4b48-b659-089e59ea82e5"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 06:58:29 crc kubenswrapper[5012]: I0219 06:58:29.246633 5012 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5afd9390-aa19-4b48-b659-089e59ea82e5-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 19 06:58:30 crc kubenswrapper[5012]: I0219 06:58:30.714663 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5afd9390-aa19-4b48-b659-089e59ea82e5" path="/var/lib/kubelet/pods/5afd9390-aa19-4b48-b659-089e59ea82e5/volumes"
Feb 19 06:58:43 crc kubenswrapper[5012]: I0219 06:58:43.303105 5012 scope.go:117] "RemoveContainer" containerID="5f63a56f4608620beb3ed89096dc42c009d98d6c2c1d04eec01e0fcc60d308fd"
Feb 19 06:58:44 crc kubenswrapper[5012]: I0219 06:58:44.430942 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 06:58:44 crc kubenswrapper[5012]: I0219 06:58:44.432256 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 06:58:44 crc kubenswrapper[5012]: I0219 06:58:44.432480 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44"
Feb 19 06:58:44 crc kubenswrapper[5012]: I0219 06:58:44.433604 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 06:58:44 crc kubenswrapper[5012]: I0219 06:58:44.433837 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" gracePeriod=600
Feb 19 06:58:44 crc kubenswrapper[5012]: E0219 06:58:44.571044 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:58:45 crc kubenswrapper[5012]: I0219 06:58:45.271921 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" exitCode=0
Feb 19 06:58:45 crc kubenswrapper[5012]: I0219 06:58:45.271979 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35"}
Feb 19 06:58:45 crc kubenswrapper[5012]: I0219 06:58:45.272011 5012 scope.go:117] "RemoveContainer" containerID="48054e4bb9edb7bcb5d43f31e62ca380e81f64c675e1f2cd4a65b9f2238ff941"
Feb 19 06:58:45 crc kubenswrapper[5012]: I0219 06:58:45.273023 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35"
Feb 19 06:58:45 crc kubenswrapper[5012]: E0219 06:58:45.273656 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:58:58 crc kubenswrapper[5012]: I0219 06:58:58.704215 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35"
Feb 19 06:58:58 crc kubenswrapper[5012]: E0219 06:58:58.706192 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:59:12 crc kubenswrapper[5012]: I0219 06:59:12.705610 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35"
Feb 19 06:59:12 crc kubenswrapper[5012]: E0219 06:59:12.706373 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:59:23 crc kubenswrapper[5012]: I0219 06:59:23.704329 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35"
Feb 19 06:59:23 crc kubenswrapper[5012]: E0219 06:59:23.705485 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:59:37 crc kubenswrapper[5012]: I0219 06:59:37.703621 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35"
Feb 19 06:59:37 crc kubenswrapper[5012]: E0219 06:59:37.704804 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 06:59:52 crc kubenswrapper[5012]: I0219 06:59:52.703610 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35"
Feb 19 06:59:52 crc kubenswrapper[5012]: E0219 06:59:52.704367 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.170996 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk"]
Feb 19 07:00:00 crc kubenswrapper[5012]: E0219 07:00:00.172827 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5afd9390-aa19-4b48-b659-089e59ea82e5" containerName="gather"
Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.172863 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5afd9390-aa19-4b48-b659-089e59ea82e5" containerName="gather"
Feb 19 07:00:00 crc kubenswrapper[5012]: E0219 07:00:00.172892 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5afd9390-aa19-4b48-b659-089e59ea82e5" containerName="copy"
Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.172913 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="5afd9390-aa19-4b48-b659-089e59ea82e5" containerName="copy"
Feb 19 07:00:00 crc kubenswrapper[5012]: E0219 07:00:00.172974 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e79e07d-bc20-4488-8ebe-4805bf39854e" containerName="container-00"
Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.172991 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e79e07d-bc20-4488-8ebe-4805bf39854e" containerName="container-00"
Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.173498 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="5afd9390-aa19-4b48-b659-089e59ea82e5" containerName="copy"
Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.173550 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e79e07d-bc20-4488-8ebe-4805bf39854e" containerName="container-00"
Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.173596 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="5afd9390-aa19-4b48-b659-089e59ea82e5" containerName="gather"
Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.175275 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk"
Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.178547 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.178956 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.188371 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk"]
Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.313187 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a395030-9ca2-4aae-b3d5-1a1a58029659-config-volume\") pod \"collect-profiles-29524740-w74xk\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk"
Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.313268 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hvdm\" (UniqueName: \"kubernetes.io/projected/6a395030-9ca2-4aae-b3d5-1a1a58029659-kube-api-access-4hvdm\") pod \"collect-profiles-29524740-w74xk\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk"
Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.313352 5012 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a395030-9ca2-4aae-b3d5-1a1a58029659-secret-volume\") pod \"collect-profiles-29524740-w74xk\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.415557 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a395030-9ca2-4aae-b3d5-1a1a58029659-secret-volume\") pod \"collect-profiles-29524740-w74xk\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.415733 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a395030-9ca2-4aae-b3d5-1a1a58029659-config-volume\") pod \"collect-profiles-29524740-w74xk\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.415805 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hvdm\" (UniqueName: \"kubernetes.io/projected/6a395030-9ca2-4aae-b3d5-1a1a58029659-kube-api-access-4hvdm\") pod \"collect-profiles-29524740-w74xk\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.416669 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a395030-9ca2-4aae-b3d5-1a1a58029659-config-volume\") pod \"collect-profiles-29524740-w74xk\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.435102 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a395030-9ca2-4aae-b3d5-1a1a58029659-secret-volume\") pod \"collect-profiles-29524740-w74xk\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.440361 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hvdm\" (UniqueName: \"kubernetes.io/projected/6a395030-9ca2-4aae-b3d5-1a1a58029659-kube-api-access-4hvdm\") pod \"collect-profiles-29524740-w74xk\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.505935 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:00 crc kubenswrapper[5012]: I0219 07:00:00.987066 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk"] Feb 19 07:00:01 crc kubenswrapper[5012]: I0219 07:00:01.254573 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" event={"ID":"6a395030-9ca2-4aae-b3d5-1a1a58029659","Type":"ContainerStarted","Data":"f1dc67997b3e5078839b18d57a364d21c0305f67aed62572ceddbff97fbef117"} Feb 19 07:00:01 crc kubenswrapper[5012]: I0219 07:00:01.254903 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" event={"ID":"6a395030-9ca2-4aae-b3d5-1a1a58029659","Type":"ContainerStarted","Data":"1cd3a44b5b379c621a9322bfcda0f41950bd313bd82a9045cbd6665063592022"} Feb 19 07:00:01 crc kubenswrapper[5012]: I0219 07:00:01.272008 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" podStartSLOduration=1.271992097 podStartE2EDuration="1.271992097s" podCreationTimestamp="2026-02-19 07:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 07:00:01.267154549 +0000 UTC m=+5697.300477118" watchObservedRunningTime="2026-02-19 07:00:01.271992097 +0000 UTC m=+5697.305314666" Feb 19 07:00:02 crc kubenswrapper[5012]: I0219 07:00:02.270155 5012 generic.go:334] "Generic (PLEG): container finished" podID="6a395030-9ca2-4aae-b3d5-1a1a58029659" containerID="f1dc67997b3e5078839b18d57a364d21c0305f67aed62572ceddbff97fbef117" exitCode=0 Feb 19 07:00:02 crc kubenswrapper[5012]: I0219 07:00:02.270595 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" event={"ID":"6a395030-9ca2-4aae-b3d5-1a1a58029659","Type":"ContainerDied","Data":"f1dc67997b3e5078839b18d57a364d21c0305f67aed62572ceddbff97fbef117"} Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.657696 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.797216 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a395030-9ca2-4aae-b3d5-1a1a58029659-config-volume\") pod \"6a395030-9ca2-4aae-b3d5-1a1a58029659\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.797480 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a395030-9ca2-4aae-b3d5-1a1a58029659-secret-volume\") pod \"6a395030-9ca2-4aae-b3d5-1a1a58029659\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.797668 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hvdm\" (UniqueName: \"kubernetes.io/projected/6a395030-9ca2-4aae-b3d5-1a1a58029659-kube-api-access-4hvdm\") pod \"6a395030-9ca2-4aae-b3d5-1a1a58029659\" (UID: \"6a395030-9ca2-4aae-b3d5-1a1a58029659\") " Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.797793 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a395030-9ca2-4aae-b3d5-1a1a58029659-config-volume" (OuterVolumeSpecName: "config-volume") pod "6a395030-9ca2-4aae-b3d5-1a1a58029659" (UID: "6a395030-9ca2-4aae-b3d5-1a1a58029659"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.798781 5012 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a395030-9ca2-4aae-b3d5-1a1a58029659-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.804507 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a395030-9ca2-4aae-b3d5-1a1a58029659-kube-api-access-4hvdm" (OuterVolumeSpecName: "kube-api-access-4hvdm") pod "6a395030-9ca2-4aae-b3d5-1a1a58029659" (UID: "6a395030-9ca2-4aae-b3d5-1a1a58029659"). InnerVolumeSpecName "kube-api-access-4hvdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.805502 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a395030-9ca2-4aae-b3d5-1a1a58029659-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6a395030-9ca2-4aae-b3d5-1a1a58029659" (UID: "6a395030-9ca2-4aae-b3d5-1a1a58029659"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.901726 5012 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a395030-9ca2-4aae-b3d5-1a1a58029659-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 07:00:03 crc kubenswrapper[5012]: I0219 07:00:03.901768 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hvdm\" (UniqueName: \"kubernetes.io/projected/6a395030-9ca2-4aae-b3d5-1a1a58029659-kube-api-access-4hvdm\") on node \"crc\" DevicePath \"\"" Feb 19 07:00:04 crc kubenswrapper[5012]: I0219 07:00:04.316925 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" event={"ID":"6a395030-9ca2-4aae-b3d5-1a1a58029659","Type":"ContainerDied","Data":"1cd3a44b5b379c621a9322bfcda0f41950bd313bd82a9045cbd6665063592022"} Feb 19 07:00:04 crc kubenswrapper[5012]: I0219 07:00:04.316975 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524740-w74xk" Feb 19 07:00:04 crc kubenswrapper[5012]: I0219 07:00:04.316978 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cd3a44b5b379c621a9322bfcda0f41950bd313bd82a9045cbd6665063592022" Feb 19 07:00:04 crc kubenswrapper[5012]: I0219 07:00:04.386187 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt"] Feb 19 07:00:04 crc kubenswrapper[5012]: I0219 07:00:04.403838 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524695-rb8tt"] Feb 19 07:00:04 crc kubenswrapper[5012]: I0219 07:00:04.717567 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d1557b7-91d6-4aac-8306-59d97142a76c" path="/var/lib/kubelet/pods/0d1557b7-91d6-4aac-8306-59d97142a76c/volumes" Feb 19 07:00:07 crc kubenswrapper[5012]: I0219 07:00:07.703563 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:00:07 crc kubenswrapper[5012]: E0219 07:00:07.704538 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:00:21 crc kubenswrapper[5012]: I0219 07:00:21.704391 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:00:21 crc kubenswrapper[5012]: E0219 07:00:21.705674 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:00:32 crc kubenswrapper[5012]: I0219 07:00:32.707374 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:00:32 crc kubenswrapper[5012]: E0219 07:00:32.708392 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:00:43 crc kubenswrapper[5012]: I0219 07:00:43.420973 5012 scope.go:117] "RemoveContainer" containerID="1a2cb819f1490aeaeb6e29cd5e196789ce8e9978f4d9987b6edfc7cea46ee158" Feb 19 07:00:44 crc kubenswrapper[5012]: I0219 07:00:44.715047 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:00:44 crc kubenswrapper[5012]: E0219 07:00:44.715878 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:00:55 crc kubenswrapper[5012]: I0219 07:00:55.703158 5012 scope.go:117] "RemoveContainer" 
containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:00:55 crc kubenswrapper[5012]: E0219 07:00:55.704335 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.164603 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29524741-6zcg8"] Feb 19 07:01:00 crc kubenswrapper[5012]: E0219 07:01:00.165661 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a395030-9ca2-4aae-b3d5-1a1a58029659" containerName="collect-profiles" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.165678 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a395030-9ca2-4aae-b3d5-1a1a58029659" containerName="collect-profiles" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.165942 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a395030-9ca2-4aae-b3d5-1a1a58029659" containerName="collect-profiles" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.166786 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.180993 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524741-6zcg8"] Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.269937 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-combined-ca-bundle\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.270039 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-fernet-keys\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.270241 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-config-data\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.270287 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgmbw\" (UniqueName: \"kubernetes.io/projected/033cc9db-2d87-48a6-8854-4d3a922a38d2-kube-api-access-kgmbw\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.372627 5012 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-combined-ca-bundle\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.372712 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-fernet-keys\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.372919 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-config-data\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.372963 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgmbw\" (UniqueName: \"kubernetes.io/projected/033cc9db-2d87-48a6-8854-4d3a922a38d2-kube-api-access-kgmbw\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.381705 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-config-data\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.381719 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-fernet-keys\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.387737 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-combined-ca-bundle\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.391603 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgmbw\" (UniqueName: \"kubernetes.io/projected/033cc9db-2d87-48a6-8854-4d3a922a38d2-kube-api-access-kgmbw\") pod \"keystone-cron-29524741-6zcg8\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:00 crc kubenswrapper[5012]: I0219 07:01:00.499742 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:01 crc kubenswrapper[5012]: I0219 07:01:01.033942 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29524741-6zcg8"] Feb 19 07:01:02 crc kubenswrapper[5012]: I0219 07:01:02.032209 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524741-6zcg8" event={"ID":"033cc9db-2d87-48a6-8854-4d3a922a38d2","Type":"ContainerStarted","Data":"010802bde907dd8632f62a4a6bfc6c99ea00d590f5191615ac5b4d8228c987b0"} Feb 19 07:01:02 crc kubenswrapper[5012]: I0219 07:01:02.032570 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524741-6zcg8" event={"ID":"033cc9db-2d87-48a6-8854-4d3a922a38d2","Type":"ContainerStarted","Data":"d041cb0aa86c139e4c0d207a084924091052e1d734602446ac5dfe526869bf4a"} Feb 19 07:01:02 crc kubenswrapper[5012]: I0219 07:01:02.054812 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29524741-6zcg8" podStartSLOduration=2.054795178 podStartE2EDuration="2.054795178s" podCreationTimestamp="2026-02-19 07:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 07:01:02.049311635 +0000 UTC m=+5758.082634204" watchObservedRunningTime="2026-02-19 07:01:02.054795178 +0000 UTC m=+5758.088117747" Feb 19 07:01:06 crc kubenswrapper[5012]: I0219 07:01:06.078048 5012 generic.go:334] "Generic (PLEG): container finished" podID="033cc9db-2d87-48a6-8854-4d3a922a38d2" containerID="010802bde907dd8632f62a4a6bfc6c99ea00d590f5191615ac5b4d8228c987b0" exitCode=0 Feb 19 07:01:06 crc kubenswrapper[5012]: I0219 07:01:06.078509 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524741-6zcg8" 
event={"ID":"033cc9db-2d87-48a6-8854-4d3a922a38d2","Type":"ContainerDied","Data":"010802bde907dd8632f62a4a6bfc6c99ea00d590f5191615ac5b4d8228c987b0"} Feb 19 07:01:06 crc kubenswrapper[5012]: I0219 07:01:06.703691 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:01:06 crc kubenswrapper[5012]: E0219 07:01:06.704507 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.548401 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.648870 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-fernet-keys\") pod \"033cc9db-2d87-48a6-8854-4d3a922a38d2\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.648952 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-config-data\") pod \"033cc9db-2d87-48a6-8854-4d3a922a38d2\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.648986 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgmbw\" (UniqueName: \"kubernetes.io/projected/033cc9db-2d87-48a6-8854-4d3a922a38d2-kube-api-access-kgmbw\") pod 
\"033cc9db-2d87-48a6-8854-4d3a922a38d2\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.649304 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-combined-ca-bundle\") pod \"033cc9db-2d87-48a6-8854-4d3a922a38d2\" (UID: \"033cc9db-2d87-48a6-8854-4d3a922a38d2\") " Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.661234 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "033cc9db-2d87-48a6-8854-4d3a922a38d2" (UID: "033cc9db-2d87-48a6-8854-4d3a922a38d2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.661830 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/033cc9db-2d87-48a6-8854-4d3a922a38d2-kube-api-access-kgmbw" (OuterVolumeSpecName: "kube-api-access-kgmbw") pod "033cc9db-2d87-48a6-8854-4d3a922a38d2" (UID: "033cc9db-2d87-48a6-8854-4d3a922a38d2"). InnerVolumeSpecName "kube-api-access-kgmbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.695235 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "033cc9db-2d87-48a6-8854-4d3a922a38d2" (UID: "033cc9db-2d87-48a6-8854-4d3a922a38d2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.718639 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-config-data" (OuterVolumeSpecName: "config-data") pod "033cc9db-2d87-48a6-8854-4d3a922a38d2" (UID: "033cc9db-2d87-48a6-8854-4d3a922a38d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.752061 5012 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.752104 5012 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.752116 5012 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/033cc9db-2d87-48a6-8854-4d3a922a38d2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 07:01:07 crc kubenswrapper[5012]: I0219 07:01:07.752128 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgmbw\" (UniqueName: \"kubernetes.io/projected/033cc9db-2d87-48a6-8854-4d3a922a38d2-kube-api-access-kgmbw\") on node \"crc\" DevicePath \"\"" Feb 19 07:01:08 crc kubenswrapper[5012]: I0219 07:01:08.106614 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29524741-6zcg8" event={"ID":"033cc9db-2d87-48a6-8854-4d3a922a38d2","Type":"ContainerDied","Data":"d041cb0aa86c139e4c0d207a084924091052e1d734602446ac5dfe526869bf4a"} Feb 19 07:01:08 crc kubenswrapper[5012]: I0219 07:01:08.106691 5012 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="d041cb0aa86c139e4c0d207a084924091052e1d734602446ac5dfe526869bf4a" Feb 19 07:01:08 crc kubenswrapper[5012]: I0219 07:01:08.106700 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29524741-6zcg8" Feb 19 07:01:18 crc kubenswrapper[5012]: I0219 07:01:18.703336 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:01:18 crc kubenswrapper[5012]: E0219 07:01:18.704549 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:01:32 crc kubenswrapper[5012]: I0219 07:01:32.703845 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:01:32 crc kubenswrapper[5012]: E0219 07:01:32.704930 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.607486 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xcmr2"] Feb 19 07:01:38 crc kubenswrapper[5012]: E0219 07:01:38.608458 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="033cc9db-2d87-48a6-8854-4d3a922a38d2" containerName="keystone-cron" 
Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.608471 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="033cc9db-2d87-48a6-8854-4d3a922a38d2" containerName="keystone-cron" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.608655 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="033cc9db-2d87-48a6-8854-4d3a922a38d2" containerName="keystone-cron" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.610004 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.623436 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcmr2"] Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.737006 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-catalog-content\") pod \"certified-operators-xcmr2\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.737183 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-utilities\") pod \"certified-operators-xcmr2\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.737476 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlkcv\" (UniqueName: \"kubernetes.io/projected/3f830c27-7555-43b2-a77d-e6bc05150b6e-kube-api-access-tlkcv\") pod \"certified-operators-xcmr2\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " 
pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.839190 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-catalog-content\") pod \"certified-operators-xcmr2\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.839253 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-utilities\") pod \"certified-operators-xcmr2\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.839364 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlkcv\" (UniqueName: \"kubernetes.io/projected/3f830c27-7555-43b2-a77d-e6bc05150b6e-kube-api-access-tlkcv\") pod \"certified-operators-xcmr2\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.839933 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-catalog-content\") pod \"certified-operators-xcmr2\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.840461 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-utilities\") pod \"certified-operators-xcmr2\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " 
pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.868025 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlkcv\" (UniqueName: \"kubernetes.io/projected/3f830c27-7555-43b2-a77d-e6bc05150b6e-kube-api-access-tlkcv\") pod \"certified-operators-xcmr2\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:38 crc kubenswrapper[5012]: I0219 07:01:38.948437 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:39 crc kubenswrapper[5012]: I0219 07:01:39.450042 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcmr2"] Feb 19 07:01:39 crc kubenswrapper[5012]: I0219 07:01:39.470275 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmr2" event={"ID":"3f830c27-7555-43b2-a77d-e6bc05150b6e","Type":"ContainerStarted","Data":"017c2066f6389ee51fc586037fd17ca7f470bb393e6d0c4c4927f4cae8cf8d41"} Feb 19 07:01:40 crc kubenswrapper[5012]: I0219 07:01:40.491177 5012 generic.go:334] "Generic (PLEG): container finished" podID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerID="f41315fc3d6f34513f97bf91d9cdd999354b5e17f3bc12e24f6931f5e2179b3e" exitCode=0 Feb 19 07:01:40 crc kubenswrapper[5012]: I0219 07:01:40.491269 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmr2" event={"ID":"3f830c27-7555-43b2-a77d-e6bc05150b6e","Type":"ContainerDied","Data":"f41315fc3d6f34513f97bf91d9cdd999354b5e17f3bc12e24f6931f5e2179b3e"} Feb 19 07:01:40 crc kubenswrapper[5012]: I0219 07:01:40.496955 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 07:01:41 crc kubenswrapper[5012]: I0219 07:01:41.506239 5012 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-xcmr2" event={"ID":"3f830c27-7555-43b2-a77d-e6bc05150b6e","Type":"ContainerStarted","Data":"c33e2694d77366551c98e8486a7d8298bcc8806ee0a8ca992cf2d7fa38d1bc81"} Feb 19 07:01:43 crc kubenswrapper[5012]: I0219 07:01:43.533368 5012 generic.go:334] "Generic (PLEG): container finished" podID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerID="c33e2694d77366551c98e8486a7d8298bcc8806ee0a8ca992cf2d7fa38d1bc81" exitCode=0 Feb 19 07:01:43 crc kubenswrapper[5012]: I0219 07:01:43.533380 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmr2" event={"ID":"3f830c27-7555-43b2-a77d-e6bc05150b6e","Type":"ContainerDied","Data":"c33e2694d77366551c98e8486a7d8298bcc8806ee0a8ca992cf2d7fa38d1bc81"} Feb 19 07:01:43 crc kubenswrapper[5012]: I0219 07:01:43.704148 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:01:43 crc kubenswrapper[5012]: E0219 07:01:43.704781 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:01:44 crc kubenswrapper[5012]: I0219 07:01:44.560987 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmr2" event={"ID":"3f830c27-7555-43b2-a77d-e6bc05150b6e","Type":"ContainerStarted","Data":"8e2d95b56164517e67558645952cafad9c0eebeb6bee0e8a92fd831955e50d23"} Feb 19 07:01:44 crc kubenswrapper[5012]: I0219 07:01:44.588334 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xcmr2" 
podStartSLOduration=3.168040548 podStartE2EDuration="6.588309123s" podCreationTimestamp="2026-02-19 07:01:38 +0000 UTC" firstStartedPulling="2026-02-19 07:01:40.496290364 +0000 UTC m=+5796.529612973" lastFinishedPulling="2026-02-19 07:01:43.916558959 +0000 UTC m=+5799.949881548" observedRunningTime="2026-02-19 07:01:44.585258949 +0000 UTC m=+5800.618581518" watchObservedRunningTime="2026-02-19 07:01:44.588309123 +0000 UTC m=+5800.621631692" Feb 19 07:01:48 crc kubenswrapper[5012]: I0219 07:01:48.949373 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:48 crc kubenswrapper[5012]: I0219 07:01:48.950656 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:49 crc kubenswrapper[5012]: I0219 07:01:49.012575 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:49 crc kubenswrapper[5012]: I0219 07:01:49.709692 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:50 crc kubenswrapper[5012]: I0219 07:01:50.268397 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcmr2"] Feb 19 07:01:51 crc kubenswrapper[5012]: I0219 07:01:51.647907 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xcmr2" podUID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerName="registry-server" containerID="cri-o://8e2d95b56164517e67558645952cafad9c0eebeb6bee0e8a92fd831955e50d23" gracePeriod=2 Feb 19 07:01:52 crc kubenswrapper[5012]: I0219 07:01:52.660166 5012 generic.go:334] "Generic (PLEG): container finished" podID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerID="8e2d95b56164517e67558645952cafad9c0eebeb6bee0e8a92fd831955e50d23" 
exitCode=0 Feb 19 07:01:52 crc kubenswrapper[5012]: I0219 07:01:52.660289 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmr2" event={"ID":"3f830c27-7555-43b2-a77d-e6bc05150b6e","Type":"ContainerDied","Data":"8e2d95b56164517e67558645952cafad9c0eebeb6bee0e8a92fd831955e50d23"} Feb 19 07:01:52 crc kubenswrapper[5012]: I0219 07:01:52.823862 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:52 crc kubenswrapper[5012]: I0219 07:01:52.953382 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-catalog-content\") pod \"3f830c27-7555-43b2-a77d-e6bc05150b6e\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " Feb 19 07:01:52 crc kubenswrapper[5012]: I0219 07:01:52.953473 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-utilities\") pod \"3f830c27-7555-43b2-a77d-e6bc05150b6e\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " Feb 19 07:01:52 crc kubenswrapper[5012]: I0219 07:01:52.953516 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlkcv\" (UniqueName: \"kubernetes.io/projected/3f830c27-7555-43b2-a77d-e6bc05150b6e-kube-api-access-tlkcv\") pod \"3f830c27-7555-43b2-a77d-e6bc05150b6e\" (UID: \"3f830c27-7555-43b2-a77d-e6bc05150b6e\") " Feb 19 07:01:52 crc kubenswrapper[5012]: I0219 07:01:52.954825 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-utilities" (OuterVolumeSpecName: "utilities") pod "3f830c27-7555-43b2-a77d-e6bc05150b6e" (UID: "3f830c27-7555-43b2-a77d-e6bc05150b6e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 07:01:52 crc kubenswrapper[5012]: I0219 07:01:52.959413 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f830c27-7555-43b2-a77d-e6bc05150b6e-kube-api-access-tlkcv" (OuterVolumeSpecName: "kube-api-access-tlkcv") pod "3f830c27-7555-43b2-a77d-e6bc05150b6e" (UID: "3f830c27-7555-43b2-a77d-e6bc05150b6e"). InnerVolumeSpecName "kube-api-access-tlkcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.008796 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f830c27-7555-43b2-a77d-e6bc05150b6e" (UID: "3f830c27-7555-43b2-a77d-e6bc05150b6e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.056663 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.056715 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f830c27-7555-43b2-a77d-e6bc05150b6e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.056735 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlkcv\" (UniqueName: \"kubernetes.io/projected/3f830c27-7555-43b2-a77d-e6bc05150b6e-kube-api-access-tlkcv\") on node \"crc\" DevicePath \"\"" Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.676734 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcmr2" 
event={"ID":"3f830c27-7555-43b2-a77d-e6bc05150b6e","Type":"ContainerDied","Data":"017c2066f6389ee51fc586037fd17ca7f470bb393e6d0c4c4927f4cae8cf8d41"} Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.677147 5012 scope.go:117] "RemoveContainer" containerID="8e2d95b56164517e67558645952cafad9c0eebeb6bee0e8a92fd831955e50d23" Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.677445 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcmr2" Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.706106 5012 scope.go:117] "RemoveContainer" containerID="c33e2694d77366551c98e8486a7d8298bcc8806ee0a8ca992cf2d7fa38d1bc81" Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.740830 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xcmr2"] Feb 19 07:01:53 crc kubenswrapper[5012]: I0219 07:01:53.755014 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xcmr2"] Feb 19 07:01:54 crc kubenswrapper[5012]: I0219 07:01:54.653890 5012 scope.go:117] "RemoveContainer" containerID="f41315fc3d6f34513f97bf91d9cdd999354b5e17f3bc12e24f6931f5e2179b3e" Feb 19 07:01:54 crc kubenswrapper[5012]: I0219 07:01:54.729953 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f830c27-7555-43b2-a77d-e6bc05150b6e" path="/var/lib/kubelet/pods/3f830c27-7555-43b2-a77d-e6bc05150b6e/volumes" Feb 19 07:01:58 crc kubenswrapper[5012]: I0219 07:01:58.703671 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:01:58 crc kubenswrapper[5012]: E0219 07:01:58.704986 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.724113 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8nbsb/must-gather-znn9c"] Feb 19 07:02:10 crc kubenswrapper[5012]: E0219 07:02:10.725248 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerName="registry-server" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.725262 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerName="registry-server" Feb 19 07:02:10 crc kubenswrapper[5012]: E0219 07:02:10.725275 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerName="extract-utilities" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.725281 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerName="extract-utilities" Feb 19 07:02:10 crc kubenswrapper[5012]: E0219 07:02:10.725314 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerName="extract-content" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.725321 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerName="extract-content" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.725496 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f830c27-7555-43b2-a77d-e6bc05150b6e" containerName="registry-server" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.726510 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.728791 5012 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8nbsb"/"default-dockercfg-5pvfq" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.728793 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8nbsb"/"openshift-service-ca.crt" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.730400 5012 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8nbsb"/"kube-root-ca.crt" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.776548 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/91bc1236-3737-44f8-a82a-35044bd3258b-must-gather-output\") pod \"must-gather-znn9c\" (UID: \"91bc1236-3737-44f8-a82a-35044bd3258b\") " pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.777026 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh8mp\" (UniqueName: \"kubernetes.io/projected/91bc1236-3737-44f8-a82a-35044bd3258b-kube-api-access-fh8mp\") pod \"must-gather-znn9c\" (UID: \"91bc1236-3737-44f8-a82a-35044bd3258b\") " pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.821952 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8nbsb/must-gather-znn9c"] Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.878543 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/91bc1236-3737-44f8-a82a-35044bd3258b-must-gather-output\") pod \"must-gather-znn9c\" (UID: \"91bc1236-3737-44f8-a82a-35044bd3258b\") " 
pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.878651 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh8mp\" (UniqueName: \"kubernetes.io/projected/91bc1236-3737-44f8-a82a-35044bd3258b-kube-api-access-fh8mp\") pod \"must-gather-znn9c\" (UID: \"91bc1236-3737-44f8-a82a-35044bd3258b\") " pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.879015 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/91bc1236-3737-44f8-a82a-35044bd3258b-must-gather-output\") pod \"must-gather-znn9c\" (UID: \"91bc1236-3737-44f8-a82a-35044bd3258b\") " pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:02:10 crc kubenswrapper[5012]: I0219 07:02:10.897175 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh8mp\" (UniqueName: \"kubernetes.io/projected/91bc1236-3737-44f8-a82a-35044bd3258b-kube-api-access-fh8mp\") pod \"must-gather-znn9c\" (UID: \"91bc1236-3737-44f8-a82a-35044bd3258b\") " pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:02:11 crc kubenswrapper[5012]: I0219 07:02:11.049327 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:02:11 crc kubenswrapper[5012]: I0219 07:02:11.578257 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8nbsb/must-gather-znn9c"] Feb 19 07:02:11 crc kubenswrapper[5012]: I0219 07:02:11.950516 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/must-gather-znn9c" event={"ID":"91bc1236-3737-44f8-a82a-35044bd3258b","Type":"ContainerStarted","Data":"746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251"} Feb 19 07:02:11 crc kubenswrapper[5012]: I0219 07:02:11.950919 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/must-gather-znn9c" event={"ID":"91bc1236-3737-44f8-a82a-35044bd3258b","Type":"ContainerStarted","Data":"156e99b97364e87cddf166ac671f99b3f88230e0a4aeb448abb7212f6a34076e"} Feb 19 07:02:12 crc kubenswrapper[5012]: I0219 07:02:12.706996 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:02:12 crc kubenswrapper[5012]: E0219 07:02:12.708223 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:02:12 crc kubenswrapper[5012]: I0219 07:02:12.967510 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/must-gather-znn9c" event={"ID":"91bc1236-3737-44f8-a82a-35044bd3258b","Type":"ContainerStarted","Data":"e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d"} Feb 19 07:02:13 crc kubenswrapper[5012]: I0219 07:02:13.000671 5012 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-8nbsb/must-gather-znn9c" podStartSLOduration=3.000653037 podStartE2EDuration="3.000653037s" podCreationTimestamp="2026-02-19 07:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 07:02:12.995575044 +0000 UTC m=+5829.028897623" watchObservedRunningTime="2026-02-19 07:02:13.000653037 +0000 UTC m=+5829.033975616" Feb 19 07:02:16 crc kubenswrapper[5012]: I0219 07:02:16.117452 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8nbsb/crc-debug-t2lv8"] Feb 19 07:02:16 crc kubenswrapper[5012]: I0219 07:02:16.122346 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" Feb 19 07:02:16 crc kubenswrapper[5012]: I0219 07:02:16.195377 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-host\") pod \"crc-debug-t2lv8\" (UID: \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\") " pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" Feb 19 07:02:16 crc kubenswrapper[5012]: I0219 07:02:16.195571 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdxrl\" (UniqueName: \"kubernetes.io/projected/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-kube-api-access-zdxrl\") pod \"crc-debug-t2lv8\" (UID: \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\") " pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" Feb 19 07:02:16 crc kubenswrapper[5012]: I0219 07:02:16.297736 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-host\") pod \"crc-debug-t2lv8\" (UID: \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\") " pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" Feb 19 07:02:16 crc 
kubenswrapper[5012]: I0219 07:02:16.297843 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxrl\" (UniqueName: \"kubernetes.io/projected/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-kube-api-access-zdxrl\") pod \"crc-debug-t2lv8\" (UID: \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\") " pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" Feb 19 07:02:16 crc kubenswrapper[5012]: I0219 07:02:16.297894 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-host\") pod \"crc-debug-t2lv8\" (UID: \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\") " pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" Feb 19 07:02:16 crc kubenswrapper[5012]: I0219 07:02:16.320104 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdxrl\" (UniqueName: \"kubernetes.io/projected/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-kube-api-access-zdxrl\") pod \"crc-debug-t2lv8\" (UID: \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\") " pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" Feb 19 07:02:16 crc kubenswrapper[5012]: I0219 07:02:16.441398 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" Feb 19 07:02:17 crc kubenswrapper[5012]: I0219 07:02:17.046699 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" event={"ID":"6087189c-c5d3-4586-95a7-3d7cfd01b5f2","Type":"ContainerStarted","Data":"8925602fab983c962e968ddbebc86a948cd3945bd659b4613398d3aca81b02b2"} Feb 19 07:02:17 crc kubenswrapper[5012]: I0219 07:02:17.047002 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" event={"ID":"6087189c-c5d3-4586-95a7-3d7cfd01b5f2","Type":"ContainerStarted","Data":"2fb250eca206a6fe964f945abc50fa70f4f03b4ced584e3c47672f73313f6c80"} Feb 19 07:02:17 crc kubenswrapper[5012]: I0219 07:02:17.071814 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" podStartSLOduration=1.071797818 podStartE2EDuration="1.071797818s" podCreationTimestamp="2026-02-19 07:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 07:02:17.063762673 +0000 UTC m=+5833.097085242" watchObservedRunningTime="2026-02-19 07:02:17.071797818 +0000 UTC m=+5833.105120387" Feb 19 07:02:25 crc kubenswrapper[5012]: I0219 07:02:25.704222 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:02:25 crc kubenswrapper[5012]: E0219 07:02:25.707405 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:02:37 crc 
kubenswrapper[5012]: I0219 07:02:37.703044 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:02:37 crc kubenswrapper[5012]: E0219 07:02:37.704232 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.046672 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zwzdr"] Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.069027 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zwzdr"] Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.069601 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zwzdr"
Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.136872 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-utilities\") pod \"redhat-operators-zwzdr\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " pod="openshift-marketplace/redhat-operators-zwzdr"
Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.136931 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmjm9\" (UniqueName: \"kubernetes.io/projected/f1690cd8-3b2d-461b-810a-4958ef591f15-kube-api-access-cmjm9\") pod \"redhat-operators-zwzdr\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " pod="openshift-marketplace/redhat-operators-zwzdr"
Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.136968 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-catalog-content\") pod \"redhat-operators-zwzdr\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " pod="openshift-marketplace/redhat-operators-zwzdr"
Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.240951 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-utilities\") pod \"redhat-operators-zwzdr\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " pod="openshift-marketplace/redhat-operators-zwzdr"
Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.241091 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmjm9\" (UniqueName: \"kubernetes.io/projected/f1690cd8-3b2d-461b-810a-4958ef591f15-kube-api-access-cmjm9\") pod \"redhat-operators-zwzdr\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " pod="openshift-marketplace/redhat-operators-zwzdr"
Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.241187 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-catalog-content\") pod \"redhat-operators-zwzdr\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " pod="openshift-marketplace/redhat-operators-zwzdr"
Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.242859 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-utilities\") pod \"redhat-operators-zwzdr\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " pod="openshift-marketplace/redhat-operators-zwzdr"
Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.242917 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-catalog-content\") pod \"redhat-operators-zwzdr\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " pod="openshift-marketplace/redhat-operators-zwzdr"
Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.275190 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmjm9\" (UniqueName: \"kubernetes.io/projected/f1690cd8-3b2d-461b-810a-4958ef591f15-kube-api-access-cmjm9\") pod \"redhat-operators-zwzdr\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") " pod="openshift-marketplace/redhat-operators-zwzdr"
Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.408769 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zwzdr"
Feb 19 07:02:49 crc kubenswrapper[5012]: I0219 07:02:49.885997 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zwzdr"]
Feb 19 07:02:50 crc kubenswrapper[5012]: I0219 07:02:50.365892 5012 generic.go:334] "Generic (PLEG): container finished" podID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerID="2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8" exitCode=0
Feb 19 07:02:50 crc kubenswrapper[5012]: I0219 07:02:50.365990 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwzdr" event={"ID":"f1690cd8-3b2d-461b-810a-4958ef591f15","Type":"ContainerDied","Data":"2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8"}
Feb 19 07:02:50 crc kubenswrapper[5012]: I0219 07:02:50.366152 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwzdr" event={"ID":"f1690cd8-3b2d-461b-810a-4958ef591f15","Type":"ContainerStarted","Data":"dfe2599a1e23379af3070d43b629a6fe3b0a2d40d5bdd99f900388c40aebed26"}
Feb 19 07:02:51 crc kubenswrapper[5012]: I0219 07:02:51.704064 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35"
Feb 19 07:02:51 crc kubenswrapper[5012]: E0219 07:02:51.704608 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:02:52 crc kubenswrapper[5012]: I0219 07:02:52.384878 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwzdr" event={"ID":"f1690cd8-3b2d-461b-810a-4958ef591f15","Type":"ContainerStarted","Data":"82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75"}
Feb 19 07:02:55 crc kubenswrapper[5012]: I0219 07:02:55.417972 5012 generic.go:334] "Generic (PLEG): container finished" podID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerID="82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75" exitCode=0
Feb 19 07:02:55 crc kubenswrapper[5012]: I0219 07:02:55.418058 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwzdr" event={"ID":"f1690cd8-3b2d-461b-810a-4958ef591f15","Type":"ContainerDied","Data":"82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75"}
Feb 19 07:02:56 crc kubenswrapper[5012]: I0219 07:02:56.432880 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwzdr" event={"ID":"f1690cd8-3b2d-461b-810a-4958ef591f15","Type":"ContainerStarted","Data":"5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2"}
Feb 19 07:02:56 crc kubenswrapper[5012]: I0219 07:02:56.434826 5012 generic.go:334] "Generic (PLEG): container finished" podID="6087189c-c5d3-4586-95a7-3d7cfd01b5f2" containerID="8925602fab983c962e968ddbebc86a948cd3945bd659b4613398d3aca81b02b2" exitCode=0
Feb 19 07:02:56 crc kubenswrapper[5012]: I0219 07:02:56.434876 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/crc-debug-t2lv8" event={"ID":"6087189c-c5d3-4586-95a7-3d7cfd01b5f2","Type":"ContainerDied","Data":"8925602fab983c962e968ddbebc86a948cd3945bd659b4613398d3aca81b02b2"}
Feb 19 07:02:56 crc kubenswrapper[5012]: I0219 07:02:56.459416 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zwzdr" podStartSLOduration=2.004872331 podStartE2EDuration="7.459387939s" podCreationTimestamp="2026-02-19 07:02:49 +0000 UTC" firstStartedPulling="2026-02-19 07:02:50.367806051 +0000 UTC m=+5866.401128620" lastFinishedPulling="2026-02-19 07:02:55.822321659 +0000 UTC m=+5871.855644228" observedRunningTime="2026-02-19 07:02:56.44956118 +0000 UTC m=+5872.482883789" watchObservedRunningTime="2026-02-19 07:02:56.459387939 +0000 UTC m=+5872.492710518"
Feb 19 07:02:57 crc kubenswrapper[5012]: I0219 07:02:57.573432 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-t2lv8"
Feb 19 07:02:57 crc kubenswrapper[5012]: I0219 07:02:57.614286 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8nbsb/crc-debug-t2lv8"]
Feb 19 07:02:57 crc kubenswrapper[5012]: I0219 07:02:57.626188 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8nbsb/crc-debug-t2lv8"]
Feb 19 07:02:57 crc kubenswrapper[5012]: I0219 07:02:57.708051 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdxrl\" (UniqueName: \"kubernetes.io/projected/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-kube-api-access-zdxrl\") pod \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\" (UID: \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\") "
Feb 19 07:02:57 crc kubenswrapper[5012]: I0219 07:02:57.708319 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-host\") pod \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\" (UID: \"6087189c-c5d3-4586-95a7-3d7cfd01b5f2\") "
Feb 19 07:02:57 crc kubenswrapper[5012]: I0219 07:02:57.708387 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-host" (OuterVolumeSpecName: "host") pod "6087189c-c5d3-4586-95a7-3d7cfd01b5f2" (UID: "6087189c-c5d3-4586-95a7-3d7cfd01b5f2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 07:02:57 crc kubenswrapper[5012]: I0219 07:02:57.708788 5012 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-host\") on node \"crc\" DevicePath \"\""
Feb 19 07:02:57 crc kubenswrapper[5012]: I0219 07:02:57.715267 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-kube-api-access-zdxrl" (OuterVolumeSpecName: "kube-api-access-zdxrl") pod "6087189c-c5d3-4586-95a7-3d7cfd01b5f2" (UID: "6087189c-c5d3-4586-95a7-3d7cfd01b5f2"). InnerVolumeSpecName "kube-api-access-zdxrl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 07:02:57 crc kubenswrapper[5012]: I0219 07:02:57.813646 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdxrl\" (UniqueName: \"kubernetes.io/projected/6087189c-c5d3-4586-95a7-3d7cfd01b5f2-kube-api-access-zdxrl\") on node \"crc\" DevicePath \"\""
Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.454056 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fb250eca206a6fe964f945abc50fa70f4f03b4ced584e3c47672f73313f6c80"
Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.454143 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-t2lv8"
Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.715990 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6087189c-c5d3-4586-95a7-3d7cfd01b5f2" path="/var/lib/kubelet/pods/6087189c-c5d3-4586-95a7-3d7cfd01b5f2/volumes"
Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.826990 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8nbsb/crc-debug-54sj9"]
Feb 19 07:02:58 crc kubenswrapper[5012]: E0219 07:02:58.827450 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6087189c-c5d3-4586-95a7-3d7cfd01b5f2" containerName="container-00"
Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.827470 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="6087189c-c5d3-4586-95a7-3d7cfd01b5f2" containerName="container-00"
Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.827727 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="6087189c-c5d3-4586-95a7-3d7cfd01b5f2" containerName="container-00"
Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.828561 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-54sj9"
Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.875832 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad8e11d7-43c8-4590-b524-64c0ca3a440b-host\") pod \"crc-debug-54sj9\" (UID: \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\") " pod="openshift-must-gather-8nbsb/crc-debug-54sj9"
Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.876195 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf5kx\" (UniqueName: \"kubernetes.io/projected/ad8e11d7-43c8-4590-b524-64c0ca3a440b-kube-api-access-kf5kx\") pod \"crc-debug-54sj9\" (UID: \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\") " pod="openshift-must-gather-8nbsb/crc-debug-54sj9"
Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.978394 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad8e11d7-43c8-4590-b524-64c0ca3a440b-host\") pod \"crc-debug-54sj9\" (UID: \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\") " pod="openshift-must-gather-8nbsb/crc-debug-54sj9"
Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.978510 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf5kx\" (UniqueName: \"kubernetes.io/projected/ad8e11d7-43c8-4590-b524-64c0ca3a440b-kube-api-access-kf5kx\") pod \"crc-debug-54sj9\" (UID: \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\") " pod="openshift-must-gather-8nbsb/crc-debug-54sj9"
Feb 19 07:02:58 crc kubenswrapper[5012]: I0219 07:02:58.978610 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad8e11d7-43c8-4590-b524-64c0ca3a440b-host\") pod \"crc-debug-54sj9\" (UID: \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\") " pod="openshift-must-gather-8nbsb/crc-debug-54sj9"
Feb 19 07:02:59 crc kubenswrapper[5012]: I0219 07:02:59.003976 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf5kx\" (UniqueName: \"kubernetes.io/projected/ad8e11d7-43c8-4590-b524-64c0ca3a440b-kube-api-access-kf5kx\") pod \"crc-debug-54sj9\" (UID: \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\") " pod="openshift-must-gather-8nbsb/crc-debug-54sj9"
Feb 19 07:02:59 crc kubenswrapper[5012]: I0219 07:02:59.143544 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-54sj9"
Feb 19 07:02:59 crc kubenswrapper[5012]: W0219 07:02:59.170414 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad8e11d7_43c8_4590_b524_64c0ca3a440b.slice/crio-e47b2defded8c82f6737a3985228733855b0135be026346719aedc589abd242d WatchSource:0}: Error finding container e47b2defded8c82f6737a3985228733855b0135be026346719aedc589abd242d: Status 404 returned error can't find the container with id e47b2defded8c82f6737a3985228733855b0135be026346719aedc589abd242d
Feb 19 07:02:59 crc kubenswrapper[5012]: I0219 07:02:59.410022 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zwzdr"
Feb 19 07:02:59 crc kubenswrapper[5012]: I0219 07:02:59.410518 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zwzdr"
Feb 19 07:02:59 crc kubenswrapper[5012]: I0219 07:02:59.465125 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/crc-debug-54sj9" event={"ID":"ad8e11d7-43c8-4590-b524-64c0ca3a440b","Type":"ContainerStarted","Data":"783537a7e84f3b0ed638f3eb6a2789d1dd27811c0584c5d95f222e682776f22b"}
Feb 19 07:02:59 crc kubenswrapper[5012]: I0219 07:02:59.465197 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/crc-debug-54sj9" event={"ID":"ad8e11d7-43c8-4590-b524-64c0ca3a440b","Type":"ContainerStarted","Data":"e47b2defded8c82f6737a3985228733855b0135be026346719aedc589abd242d"}
Feb 19 07:02:59 crc kubenswrapper[5012]: I0219 07:02:59.492289 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8nbsb/crc-debug-54sj9" podStartSLOduration=1.4922652539999999 podStartE2EDuration="1.492265254s" podCreationTimestamp="2026-02-19 07:02:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 07:02:59.47727949 +0000 UTC m=+5875.510602069" watchObservedRunningTime="2026-02-19 07:02:59.492265254 +0000 UTC m=+5875.525587853"
Feb 19 07:03:00 crc kubenswrapper[5012]: I0219 07:03:00.473089 5012 generic.go:334] "Generic (PLEG): container finished" podID="ad8e11d7-43c8-4590-b524-64c0ca3a440b" containerID="783537a7e84f3b0ed638f3eb6a2789d1dd27811c0584c5d95f222e682776f22b" exitCode=0
Feb 19 07:03:00 crc kubenswrapper[5012]: I0219 07:03:00.473128 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/crc-debug-54sj9" event={"ID":"ad8e11d7-43c8-4590-b524-64c0ca3a440b","Type":"ContainerDied","Data":"783537a7e84f3b0ed638f3eb6a2789d1dd27811c0584c5d95f222e682776f22b"}
Feb 19 07:03:00 crc kubenswrapper[5012]: I0219 07:03:00.498981 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zwzdr" podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerName="registry-server" probeResult="failure" output=<
Feb 19 07:03:00 crc kubenswrapper[5012]: 	timeout: failed to connect service ":50051" within 1s
Feb 19 07:03:00 crc kubenswrapper[5012]: >
Feb 19 07:03:01 crc kubenswrapper[5012]: I0219 07:03:01.578025 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-54sj9"
Feb 19 07:03:01 crc kubenswrapper[5012]: I0219 07:03:01.641553 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8nbsb/crc-debug-54sj9"]
Feb 19 07:03:01 crc kubenswrapper[5012]: I0219 07:03:01.650644 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8nbsb/crc-debug-54sj9"]
Feb 19 07:03:01 crc kubenswrapper[5012]: I0219 07:03:01.721845 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf5kx\" (UniqueName: \"kubernetes.io/projected/ad8e11d7-43c8-4590-b524-64c0ca3a440b-kube-api-access-kf5kx\") pod \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\" (UID: \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\") "
Feb 19 07:03:01 crc kubenswrapper[5012]: I0219 07:03:01.721888 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad8e11d7-43c8-4590-b524-64c0ca3a440b-host\") pod \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\" (UID: \"ad8e11d7-43c8-4590-b524-64c0ca3a440b\") "
Feb 19 07:03:01 crc kubenswrapper[5012]: I0219 07:03:01.722461 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad8e11d7-43c8-4590-b524-64c0ca3a440b-host" (OuterVolumeSpecName: "host") pod "ad8e11d7-43c8-4590-b524-64c0ca3a440b" (UID: "ad8e11d7-43c8-4590-b524-64c0ca3a440b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 07:03:01 crc kubenswrapper[5012]: I0219 07:03:01.728514 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad8e11d7-43c8-4590-b524-64c0ca3a440b-kube-api-access-kf5kx" (OuterVolumeSpecName: "kube-api-access-kf5kx") pod "ad8e11d7-43c8-4590-b524-64c0ca3a440b" (UID: "ad8e11d7-43c8-4590-b524-64c0ca3a440b"). InnerVolumeSpecName "kube-api-access-kf5kx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 07:03:01 crc kubenswrapper[5012]: I0219 07:03:01.824605 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf5kx\" (UniqueName: \"kubernetes.io/projected/ad8e11d7-43c8-4590-b524-64c0ca3a440b-kube-api-access-kf5kx\") on node \"crc\" DevicePath \"\""
Feb 19 07:03:01 crc kubenswrapper[5012]: I0219 07:03:01.824895 5012 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad8e11d7-43c8-4590-b524-64c0ca3a440b-host\") on node \"crc\" DevicePath \"\""
Feb 19 07:03:02 crc kubenswrapper[5012]: I0219 07:03:02.489412 5012 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e47b2defded8c82f6737a3985228733855b0135be026346719aedc589abd242d"
Feb 19 07:03:02 crc kubenswrapper[5012]: I0219 07:03:02.489459 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-54sj9"
Feb 19 07:03:02 crc kubenswrapper[5012]: I0219 07:03:02.724988 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad8e11d7-43c8-4590-b524-64c0ca3a440b" path="/var/lib/kubelet/pods/ad8e11d7-43c8-4590-b524-64c0ca3a440b/volumes"
Feb 19 07:03:02 crc kubenswrapper[5012]: I0219 07:03:02.869204 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8nbsb/crc-debug-g7dqh"]
Feb 19 07:03:02 crc kubenswrapper[5012]: E0219 07:03:02.869876 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad8e11d7-43c8-4590-b524-64c0ca3a440b" containerName="container-00"
Feb 19 07:03:02 crc kubenswrapper[5012]: I0219 07:03:02.869948 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad8e11d7-43c8-4590-b524-64c0ca3a440b" containerName="container-00"
Feb 19 07:03:02 crc kubenswrapper[5012]: I0219 07:03:02.870214 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad8e11d7-43c8-4590-b524-64c0ca3a440b" containerName="container-00"
Feb 19 07:03:02 crc kubenswrapper[5012]: I0219 07:03:02.870906 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-g7dqh"
Feb 19 07:03:02 crc kubenswrapper[5012]: I0219 07:03:02.949720 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/216e052a-9145-4dba-a625-f9262c5f27cb-host\") pod \"crc-debug-g7dqh\" (UID: \"216e052a-9145-4dba-a625-f9262c5f27cb\") " pod="openshift-must-gather-8nbsb/crc-debug-g7dqh"
Feb 19 07:03:02 crc kubenswrapper[5012]: I0219 07:03:02.949996 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtmj8\" (UniqueName: \"kubernetes.io/projected/216e052a-9145-4dba-a625-f9262c5f27cb-kube-api-access-gtmj8\") pod \"crc-debug-g7dqh\" (UID: \"216e052a-9145-4dba-a625-f9262c5f27cb\") " pod="openshift-must-gather-8nbsb/crc-debug-g7dqh"
Feb 19 07:03:03 crc kubenswrapper[5012]: I0219 07:03:03.051824 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/216e052a-9145-4dba-a625-f9262c5f27cb-host\") pod \"crc-debug-g7dqh\" (UID: \"216e052a-9145-4dba-a625-f9262c5f27cb\") " pod="openshift-must-gather-8nbsb/crc-debug-g7dqh"
Feb 19 07:03:03 crc kubenswrapper[5012]: I0219 07:03:03.052156 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtmj8\" (UniqueName: \"kubernetes.io/projected/216e052a-9145-4dba-a625-f9262c5f27cb-kube-api-access-gtmj8\") pod \"crc-debug-g7dqh\" (UID: \"216e052a-9145-4dba-a625-f9262c5f27cb\") " pod="openshift-must-gather-8nbsb/crc-debug-g7dqh"
Feb 19 07:03:03 crc kubenswrapper[5012]: I0219 07:03:03.052593 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/216e052a-9145-4dba-a625-f9262c5f27cb-host\") pod \"crc-debug-g7dqh\" (UID: \"216e052a-9145-4dba-a625-f9262c5f27cb\") " pod="openshift-must-gather-8nbsb/crc-debug-g7dqh"
Feb 19 07:03:03 crc kubenswrapper[5012]: I0219 07:03:03.096448 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtmj8\" (UniqueName: \"kubernetes.io/projected/216e052a-9145-4dba-a625-f9262c5f27cb-kube-api-access-gtmj8\") pod \"crc-debug-g7dqh\" (UID: \"216e052a-9145-4dba-a625-f9262c5f27cb\") " pod="openshift-must-gather-8nbsb/crc-debug-g7dqh"
Feb 19 07:03:03 crc kubenswrapper[5012]: I0219 07:03:03.195627 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-g7dqh"
Feb 19 07:03:03 crc kubenswrapper[5012]: W0219 07:03:03.242140 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod216e052a_9145_4dba_a625_f9262c5f27cb.slice/crio-c23e6e4d668bfa0ca931eea2640f45e2f115364725ac474ee87b528a8fd5124e WatchSource:0}: Error finding container c23e6e4d668bfa0ca931eea2640f45e2f115364725ac474ee87b528a8fd5124e: Status 404 returned error can't find the container with id c23e6e4d668bfa0ca931eea2640f45e2f115364725ac474ee87b528a8fd5124e
Feb 19 07:03:03 crc kubenswrapper[5012]: I0219 07:03:03.514251 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/crc-debug-g7dqh" event={"ID":"216e052a-9145-4dba-a625-f9262c5f27cb","Type":"ContainerStarted","Data":"c23e6e4d668bfa0ca931eea2640f45e2f115364725ac474ee87b528a8fd5124e"}
Feb 19 07:03:04 crc kubenswrapper[5012]: I0219 07:03:04.523034 5012 generic.go:334] "Generic (PLEG): container finished" podID="216e052a-9145-4dba-a625-f9262c5f27cb" containerID="a9b96cbca646ccd46816aca3328f7acd269a5cdafce9ec48010be765ddb2c162" exitCode=0
Feb 19 07:03:04 crc kubenswrapper[5012]: I0219 07:03:04.523109 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/crc-debug-g7dqh" event={"ID":"216e052a-9145-4dba-a625-f9262c5f27cb","Type":"ContainerDied","Data":"a9b96cbca646ccd46816aca3328f7acd269a5cdafce9ec48010be765ddb2c162"}
Feb 19 07:03:04 crc kubenswrapper[5012]: I0219 07:03:04.561719 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8nbsb/crc-debug-g7dqh"]
Feb 19 07:03:04 crc kubenswrapper[5012]: I0219 07:03:04.573370 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8nbsb/crc-debug-g7dqh"]
Feb 19 07:03:04 crc kubenswrapper[5012]: I0219 07:03:04.713203 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35"
Feb 19 07:03:04 crc kubenswrapper[5012]: E0219 07:03:04.713721 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:03:05 crc kubenswrapper[5012]: I0219 07:03:05.647605 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-g7dqh"
Feb 19 07:03:05 crc kubenswrapper[5012]: I0219 07:03:05.807980 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtmj8\" (UniqueName: \"kubernetes.io/projected/216e052a-9145-4dba-a625-f9262c5f27cb-kube-api-access-gtmj8\") pod \"216e052a-9145-4dba-a625-f9262c5f27cb\" (UID: \"216e052a-9145-4dba-a625-f9262c5f27cb\") "
Feb 19 07:03:05 crc kubenswrapper[5012]: I0219 07:03:05.808101 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/216e052a-9145-4dba-a625-f9262c5f27cb-host\") pod \"216e052a-9145-4dba-a625-f9262c5f27cb\" (UID: \"216e052a-9145-4dba-a625-f9262c5f27cb\") "
Feb 19 07:03:05 crc kubenswrapper[5012]: I0219 07:03:05.808280 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/216e052a-9145-4dba-a625-f9262c5f27cb-host" (OuterVolumeSpecName: "host") pod "216e052a-9145-4dba-a625-f9262c5f27cb" (UID: "216e052a-9145-4dba-a625-f9262c5f27cb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 07:03:05 crc kubenswrapper[5012]: I0219 07:03:05.813968 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216e052a-9145-4dba-a625-f9262c5f27cb-kube-api-access-gtmj8" (OuterVolumeSpecName: "kube-api-access-gtmj8") pod "216e052a-9145-4dba-a625-f9262c5f27cb" (UID: "216e052a-9145-4dba-a625-f9262c5f27cb"). InnerVolumeSpecName "kube-api-access-gtmj8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 07:03:05 crc kubenswrapper[5012]: I0219 07:03:05.910002 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtmj8\" (UniqueName: \"kubernetes.io/projected/216e052a-9145-4dba-a625-f9262c5f27cb-kube-api-access-gtmj8\") on node \"crc\" DevicePath \"\""
Feb 19 07:03:05 crc kubenswrapper[5012]: I0219 07:03:05.910032 5012 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/216e052a-9145-4dba-a625-f9262c5f27cb-host\") on node \"crc\" DevicePath \"\""
Feb 19 07:03:06 crc kubenswrapper[5012]: I0219 07:03:06.543110 5012 scope.go:117] "RemoveContainer" containerID="a9b96cbca646ccd46816aca3328f7acd269a5cdafce9ec48010be765ddb2c162"
Feb 19 07:03:06 crc kubenswrapper[5012]: I0219 07:03:06.543257 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/crc-debug-g7dqh"
Feb 19 07:03:06 crc kubenswrapper[5012]: I0219 07:03:06.719361 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216e052a-9145-4dba-a625-f9262c5f27cb" path="/var/lib/kubelet/pods/216e052a-9145-4dba-a625-f9262c5f27cb/volumes"
Feb 19 07:03:09 crc kubenswrapper[5012]: I0219 07:03:09.477689 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zwzdr"
Feb 19 07:03:09 crc kubenswrapper[5012]: I0219 07:03:09.538894 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zwzdr"
Feb 19 07:03:09 crc kubenswrapper[5012]: I0219 07:03:09.728366 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zwzdr"]
Feb 19 07:03:10 crc kubenswrapper[5012]: I0219 07:03:10.582949 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zwzdr" podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerName="registry-server" containerID="cri-o://5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2" gracePeriod=2
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.030127 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zwzdr"
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.206062 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmjm9\" (UniqueName: \"kubernetes.io/projected/f1690cd8-3b2d-461b-810a-4958ef591f15-kube-api-access-cmjm9\") pod \"f1690cd8-3b2d-461b-810a-4958ef591f15\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") "
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.206252 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-utilities\") pod \"f1690cd8-3b2d-461b-810a-4958ef591f15\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") "
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.206408 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-catalog-content\") pod \"f1690cd8-3b2d-461b-810a-4958ef591f15\" (UID: \"f1690cd8-3b2d-461b-810a-4958ef591f15\") "
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.206991 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-utilities" (OuterVolumeSpecName: "utilities") pod "f1690cd8-3b2d-461b-810a-4958ef591f15" (UID: "f1690cd8-3b2d-461b-810a-4958ef591f15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.211810 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1690cd8-3b2d-461b-810a-4958ef591f15-kube-api-access-cmjm9" (OuterVolumeSpecName: "kube-api-access-cmjm9") pod "f1690cd8-3b2d-461b-810a-4958ef591f15" (UID: "f1690cd8-3b2d-461b-810a-4958ef591f15"). InnerVolumeSpecName "kube-api-access-cmjm9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.310146 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.310185 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmjm9\" (UniqueName: \"kubernetes.io/projected/f1690cd8-3b2d-461b-810a-4958ef591f15-kube-api-access-cmjm9\") on node \"crc\" DevicePath \"\""
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.322883 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1690cd8-3b2d-461b-810a-4958ef591f15" (UID: "f1690cd8-3b2d-461b-810a-4958ef591f15"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.411532 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1690cd8-3b2d-461b-810a-4958ef591f15-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.591102 5012 generic.go:334] "Generic (PLEG): container finished" podID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerID="5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2" exitCode=0
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.591140 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwzdr" event={"ID":"f1690cd8-3b2d-461b-810a-4958ef591f15","Type":"ContainerDied","Data":"5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2"}
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.591155 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zwzdr"
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.591171 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zwzdr" event={"ID":"f1690cd8-3b2d-461b-810a-4958ef591f15","Type":"ContainerDied","Data":"dfe2599a1e23379af3070d43b629a6fe3b0a2d40d5bdd99f900388c40aebed26"}
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.591189 5012 scope.go:117] "RemoveContainer" containerID="5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2"
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.610014 5012 scope.go:117] "RemoveContainer" containerID="82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75"
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.622205 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zwzdr"]
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.629398 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zwzdr"]
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.640887 5012 scope.go:117] "RemoveContainer" containerID="2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8"
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.669243 5012 scope.go:117] "RemoveContainer" containerID="5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2"
Feb 19 07:03:11 crc kubenswrapper[5012]: E0219 07:03:11.669708 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2\": container with ID starting with 5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2 not found: ID does not exist" containerID="5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2"
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.669769 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2"} err="failed to get container status \"5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2\": rpc error: code = NotFound desc = could not find container \"5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2\": container with ID starting with 5ee39d50ed1141155f1a04b6f22feca273451ef8ca763b38a8d3c8994e8abfa2 not found: ID does not exist"
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.669797 5012 scope.go:117] "RemoveContainer" containerID="82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75"
Feb 19 07:03:11 crc kubenswrapper[5012]: E0219 07:03:11.670189 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75\": container with ID starting with 82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75 not found: ID does not exist" containerID="82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75"
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.670219 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75"} err="failed to get container status \"82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75\": rpc error: code = NotFound desc = could not find container \"82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75\": container with ID starting with 82bd22401012e47d0e5207408d70de2abef03b81404d6020c20f58bf5f35ee75 not found: ID does not exist"
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.670239 5012 scope.go:117] "RemoveContainer" containerID="2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8"
Feb 19 07:03:11 crc kubenswrapper[5012]: E0219 07:03:11.670719 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8\": container with ID starting with 2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8 not found: ID does not exist" containerID="2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8"
Feb 19 07:03:11 crc kubenswrapper[5012]: I0219 07:03:11.670754 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8"} err="failed to get container status \"2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8\": rpc error: code = NotFound desc = could not find container \"2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8\": container with ID starting with 2d24ea9d619bd52dd2440a89579342cf50b5b02793538289c8b06c305681bdd8 not found: ID does not exist"
Feb 19 07:03:12 crc kubenswrapper[5012]: I0219 07:03:12.716289 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" path="/var/lib/kubelet/pods/f1690cd8-3b2d-461b-810a-4958ef591f15/volumes"
Feb 19 07:03:18 crc kubenswrapper[5012]: I0219 07:03:18.704186 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35"
Feb 19 07:03:18 crc kubenswrapper[5012]: E0219 07:03:18.705385 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:03:29 crc kubenswrapper[5012]: I0219 07:03:29.703455
5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:03:29 crc kubenswrapper[5012]: E0219 07:03:29.704338 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:03:43 crc kubenswrapper[5012]: I0219 07:03:43.703110 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:03:43 crc kubenswrapper[5012]: E0219 07:03:43.703838 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:03:54 crc kubenswrapper[5012]: I0219 07:03:54.422554 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f669f7d76-2qg4s_875bbaf1-6c43-4474-9f7b-8202b2d5ee1c/barbican-api/0.log" Feb 19 07:03:54 crc kubenswrapper[5012]: I0219 07:03:54.577752 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7f669f7d76-2qg4s_875bbaf1-6c43-4474-9f7b-8202b2d5ee1c/barbican-api-log/0.log" Feb 19 07:03:54 crc kubenswrapper[5012]: I0219 07:03:54.624180 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bb75756b-hd4xs_ee216ad2-2baf-4bba-a3fe-81acf9218af0/barbican-keystone-listener/0.log" Feb 19 07:03:54 
crc kubenswrapper[5012]: I0219 07:03:54.738263 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5bb75756b-hd4xs_ee216ad2-2baf-4bba-a3fe-81acf9218af0/barbican-keystone-listener-log/0.log" Feb 19 07:03:54 crc kubenswrapper[5012]: I0219 07:03:54.824731 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-779bfc8b79-ffj7v_9133f0f1-2d9e-462e-ba56-8a206f61bd03/barbican-worker/0.log" Feb 19 07:03:54 crc kubenswrapper[5012]: I0219 07:03:54.916398 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-779bfc8b79-ffj7v_9133f0f1-2d9e-462e-ba56-8a206f61bd03/barbican-worker-log/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.043951 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-dgngb_ebf47868-aec9-4f2e-8c08-499161f45b18/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.184093 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9647feae-5291-41e1-9bb4-631f661552b9/ceilometer-central-agent/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.209686 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9647feae-5291-41e1-9bb4-631f661552b9/ceilometer-notification-agent/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.300472 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9647feae-5291-41e1-9bb4-631f661552b9/proxy-httpd/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.334320 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_9647feae-5291-41e1-9bb4-631f661552b9/sg-core/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.508097 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_4c548edc-6755-4310-9b8d-780a384ec6bd/cinder-api-log/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.693342 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_42946b07-c256-43a7-99d0-45f94c019663/cinder-scheduler/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.699701 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_4c548edc-6755-4310-9b8d-780a384ec6bd/cinder-api/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.749328 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_42946b07-c256-43a7-99d0-45f94c019663/probe/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.928929 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-8sh74_a37d4335-7c06-4fa3-af51-6cfe6fb9a020/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:03:55 crc kubenswrapper[5012]: I0219 07:03:55.969513 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-bg5db_8fd6fe7a-c63f-4a29-a524-c3e5bc6888e3/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:03:56 crc kubenswrapper[5012]: I0219 07:03:56.100462 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-567c7bc999-cgf2v_c2eab861-ab13-4ab1-b57f-fecf9e95b9be/init/0.log" Feb 19 07:03:56 crc kubenswrapper[5012]: I0219 07:03:56.310938 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-567c7bc999-cgf2v_c2eab861-ab13-4ab1-b57f-fecf9e95b9be/init/0.log" Feb 19 07:03:56 crc kubenswrapper[5012]: I0219 07:03:56.390807 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-l597r_02358307-dba6-44fa-9799-2440b1496c55/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:03:56 crc kubenswrapper[5012]: I0219 07:03:56.441207 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-567c7bc999-cgf2v_c2eab861-ab13-4ab1-b57f-fecf9e95b9be/dnsmasq-dns/0.log" Feb 19 07:03:56 crc kubenswrapper[5012]: I0219 07:03:56.560899 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8cfddc12-1c4c-4faf-9edb-71fb80608785/glance-log/0.log" Feb 19 07:03:56 crc kubenswrapper[5012]: I0219 07:03:56.597797 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_8cfddc12-1c4c-4faf-9edb-71fb80608785/glance-httpd/0.log" Feb 19 07:03:56 crc kubenswrapper[5012]: I0219 07:03:56.932831 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f55309b7-09e5-4496-8995-f03681386729/glance-log/0.log" Feb 19 07:03:56 crc kubenswrapper[5012]: I0219 07:03:56.958106 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f55309b7-09e5-4496-8995-f03681386729/glance-httpd/0.log" Feb 19 07:03:57 crc kubenswrapper[5012]: I0219 07:03:57.133016 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cdcb467fb-8tvnz_6c937bbe-f068-4e5b-81ad-9455104062da/horizon/0.log" Feb 19 07:03:57 crc kubenswrapper[5012]: I0219 07:03:57.223558 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mvtl5_d869003b-7b03-4a8b-9f9c-73ca0ec4f359/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:03:57 crc kubenswrapper[5012]: I0219 07:03:57.478138 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-kjhk7_0037b322-99bb-4ae2-aba4-85ddcd8243ae/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:03:57 crc kubenswrapper[5012]: I0219 07:03:57.699280 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29524681-x9bcr_86c7e36d-88e3-432a-ad6f-74de626c5f30/keystone-cron/0.log" Feb 19 07:03:57 crc kubenswrapper[5012]: I0219 07:03:57.817191 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6cdcb467fb-8tvnz_6c937bbe-f068-4e5b-81ad-9455104062da/horizon-log/0.log" Feb 19 07:03:57 crc kubenswrapper[5012]: I0219 07:03:57.896862 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29524741-6zcg8_033cc9db-2d87-48a6-8854-4d3a922a38d2/keystone-cron/0.log" Feb 19 07:03:58 crc kubenswrapper[5012]: I0219 07:03:58.032844 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7b574779c9-x2bsv_0e0a6a9f-d11f-4084-9742-7780b20fae75/keystone-api/0.log" Feb 19 07:03:58 crc kubenswrapper[5012]: I0219 07:03:58.045789 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_cc79bf66-4a34-43fe-ad03-4e6ce60d2c44/kube-state-metrics/0.log" Feb 19 07:03:58 crc kubenswrapper[5012]: I0219 07:03:58.123953 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-2n79s_fcace677-35b0-499f-998c-99168fbfa0af/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:03:58 crc kubenswrapper[5012]: I0219 07:03:58.481417 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-k6xl2_534720dc-6ff8-4fdc-9337-6fe77ad1eaa8/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:03:58 crc kubenswrapper[5012]: I0219 07:03:58.600433 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-5ff88b6c7c-5bg66_eb805277-3dfc-4810-9845-3ba928d262c2/neutron-httpd/0.log" Feb 19 07:03:58 crc kubenswrapper[5012]: I0219 07:03:58.674092 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5ff88b6c7c-5bg66_eb805277-3dfc-4810-9845-3ba928d262c2/neutron-api/0.log" Feb 19 07:03:58 crc kubenswrapper[5012]: I0219 07:03:58.681660 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_3c628866-f96d-4e7b-8846-7073c98dd389/setup-container/0.log" Feb 19 07:03:58 crc kubenswrapper[5012]: I0219 07:03:58.702655 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:03:58 crc kubenswrapper[5012]: I0219 07:03:58.996713 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_3c628866-f96d-4e7b-8846-7073c98dd389/setup-container/0.log" Feb 19 07:03:59 crc kubenswrapper[5012]: I0219 07:03:59.092568 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"af85ae40f975af2e29f1da72c10ee6d4757cf3bb8cc11b605a9e59a2b37a565b"} Feb 19 07:03:59 crc kubenswrapper[5012]: I0219 07:03:59.092878 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_3c628866-f96d-4e7b-8846-7073c98dd389/rabbitmq/0.log" Feb 19 07:03:59 crc kubenswrapper[5012]: I0219 07:03:59.699575 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_6852caab-c1b6-40cd-b5df-88d22f6016bd/nova-cell0-conductor-conductor/0.log" Feb 19 07:04:00 crc kubenswrapper[5012]: I0219 07:04:00.050585 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_aceef718-9d1c-441d-bf1b-92c0a6831def/nova-cell1-conductor-conductor/0.log" 
Feb 19 07:04:00 crc kubenswrapper[5012]: I0219 07:04:00.367577 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_661e04e4-4ba2-4ea0-9ba6-3af2949e7e21/nova-cell1-novncproxy-novncproxy/0.log" Feb 19 07:04:00 crc kubenswrapper[5012]: I0219 07:04:00.479333 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c1a529b0-65f7-4680-a4fd-4dacebc1ab83/nova-api-log/0.log" Feb 19 07:04:00 crc kubenswrapper[5012]: I0219 07:04:00.530175 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-p67w4_a6116441-2985-4723-9889-6c3422159243/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:04:00 crc kubenswrapper[5012]: I0219 07:04:00.998371 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c1a529b0-65f7-4680-a4fd-4dacebc1ab83/nova-api-api/0.log" Feb 19 07:04:01 crc kubenswrapper[5012]: I0219 07:04:01.001896 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_396b18f9-9859-4b42-aca1-c29c3724c86c/nova-metadata-log/0.log" Feb 19 07:04:01 crc kubenswrapper[5012]: I0219 07:04:01.348791 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_04466d10-2177-4361-bd86-333c046b9e52/mysql-bootstrap/0.log" Feb 19 07:04:01 crc kubenswrapper[5012]: I0219 07:04:01.538637 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_04466d10-2177-4361-bd86-333c046b9e52/mysql-bootstrap/0.log" Feb 19 07:04:01 crc kubenswrapper[5012]: I0219 07:04:01.551852 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6cfb0ed7-fe80-4d03-9ecb-31587c57bfd0/nova-scheduler-scheduler/0.log" Feb 19 07:04:01 crc kubenswrapper[5012]: I0219 07:04:01.616476 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_04466d10-2177-4361-bd86-333c046b9e52/galera/0.log" Feb 19 07:04:01 crc kubenswrapper[5012]: I0219 07:04:01.775193 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1fd0c672-e258-4feb-8bbd-26135f92f7fb/mysql-bootstrap/0.log" Feb 19 07:04:01 crc kubenswrapper[5012]: I0219 07:04:01.973187 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1fd0c672-e258-4feb-8bbd-26135f92f7fb/mysql-bootstrap/0.log" Feb 19 07:04:02 crc kubenswrapper[5012]: I0219 07:04:02.007963 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_1fd0c672-e258-4feb-8bbd-26135f92f7fb/galera/0.log" Feb 19 07:04:02 crc kubenswrapper[5012]: I0219 07:04:02.227751 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_75258dbe-c223-4e55-92a6-8e588745294a/openstackclient/0.log" Feb 19 07:04:02 crc kubenswrapper[5012]: I0219 07:04:02.316809 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-cr94m_e2c9ac17-43ef-4ccb-83b1-e20ee03289de/ovn-controller/0.log" Feb 19 07:04:02 crc kubenswrapper[5012]: I0219 07:04:02.464070 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mz9j9_c711491e-0b8b-4737-88c9-bc5e37051ac1/openstack-network-exporter/0.log" Feb 19 07:04:02 crc kubenswrapper[5012]: I0219 07:04:02.721715 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7qdpg_16fbaba1-bd32-4121-8743-99422db74180/ovsdb-server-init/0.log" Feb 19 07:04:02 crc kubenswrapper[5012]: I0219 07:04:02.956988 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7qdpg_16fbaba1-bd32-4121-8743-99422db74180/ovsdb-server/0.log" Feb 19 07:04:02 crc kubenswrapper[5012]: I0219 07:04:02.966071 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-7qdpg_16fbaba1-bd32-4121-8743-99422db74180/ovsdb-server-init/0.log" Feb 19 07:04:03 crc kubenswrapper[5012]: I0219 07:04:03.157082 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_396b18f9-9859-4b42-aca1-c29c3724c86c/nova-metadata-metadata/0.log" Feb 19 07:04:03 crc kubenswrapper[5012]: I0219 07:04:03.220054 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-gxxmx_7335769e-5b13-4d1b-8aa7-e7f192ee9e2b/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:04:03 crc kubenswrapper[5012]: I0219 07:04:03.382390 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7qdpg_16fbaba1-bd32-4121-8743-99422db74180/ovs-vswitchd/0.log" Feb 19 07:04:03 crc kubenswrapper[5012]: I0219 07:04:03.427833 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e3e8f67d-0748-4bff-b7c5-8432c7e4ab64/openstack-network-exporter/0.log" Feb 19 07:04:03 crc kubenswrapper[5012]: I0219 07:04:03.467240 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_e3e8f67d-0748-4bff-b7c5-8432c7e4ab64/ovn-northd/0.log" Feb 19 07:04:03 crc kubenswrapper[5012]: I0219 07:04:03.646455 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5a9e6735-4159-4248-a8f5-6714d386901a/openstack-network-exporter/0.log" Feb 19 07:04:03 crc kubenswrapper[5012]: I0219 07:04:03.661471 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5a9e6735-4159-4248-a8f5-6714d386901a/ovsdbserver-nb/0.log" Feb 19 07:04:03 crc kubenswrapper[5012]: I0219 07:04:03.863802 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_00790bd0-5fbb-4927-8361-085c9691c171/openstack-network-exporter/0.log" Feb 19 07:04:03 crc kubenswrapper[5012]: I0219 07:04:03.890711 5012 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_00790bd0-5fbb-4927-8361-085c9691c171/ovsdbserver-sb/0.log" Feb 19 07:04:04 crc kubenswrapper[5012]: I0219 07:04:04.185425 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/init-config-reloader/0.log" Feb 19 07:04:04 crc kubenswrapper[5012]: I0219 07:04:04.249116 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f94997dd8-cvnfv_b0ce1e0a-4e51-408c-b3f8-500cf6476b96/placement-api/0.log" Feb 19 07:04:04 crc kubenswrapper[5012]: I0219 07:04:04.342840 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f94997dd8-cvnfv_b0ce1e0a-4e51-408c-b3f8-500cf6476b96/placement-log/0.log" Feb 19 07:04:04 crc kubenswrapper[5012]: I0219 07:04:04.365480 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/init-config-reloader/0.log" Feb 19 07:04:04 crc kubenswrapper[5012]: I0219 07:04:04.435979 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/config-reloader/0.log" Feb 19 07:04:04 crc kubenswrapper[5012]: I0219 07:04:04.493120 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/prometheus/0.log" Feb 19 07:04:04 crc kubenswrapper[5012]: I0219 07:04:04.584231 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_a64b2810-4982-43ef-ae9f-1e7852394d60/thanos-sidecar/0.log" Feb 19 07:04:04 crc kubenswrapper[5012]: I0219 07:04:04.694564 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4984f0c1-33e8-4506-b6d7-e554dca0e4c8/setup-container/0.log" Feb 19 07:04:05 crc kubenswrapper[5012]: I0219 07:04:05.081954 5012 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4984f0c1-33e8-4506-b6d7-e554dca0e4c8/setup-container/0.log" Feb 19 07:04:05 crc kubenswrapper[5012]: I0219 07:04:05.144583 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_4984f0c1-33e8-4506-b6d7-e554dca0e4c8/rabbitmq/0.log" Feb 19 07:04:05 crc kubenswrapper[5012]: I0219 07:04:05.218197 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c3230f97-dbe4-42a2-b009-a8370c601e78/setup-container/0.log" Feb 19 07:04:05 crc kubenswrapper[5012]: I0219 07:04:05.442482 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c3230f97-dbe4-42a2-b009-a8370c601e78/setup-container/0.log" Feb 19 07:04:05 crc kubenswrapper[5012]: I0219 07:04:05.459633 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_c3230f97-dbe4-42a2-b009-a8370c601e78/rabbitmq/0.log" Feb 19 07:04:05 crc kubenswrapper[5012]: I0219 07:04:05.483709 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-5l2fs_464de984-0dd6-4c4d-aed3-afbf84e0cdcf/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:04:05 crc kubenswrapper[5012]: I0219 07:04:05.673760 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-skvzd_07c11c1d-edfd-43a2-97fc-d5fdfbbee0bf/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:04:05 crc kubenswrapper[5012]: I0219 07:04:05.739026 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-pl267_61bd41ab-cfea-4df2-9be0-8321c6c11ebd/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:04:05 crc kubenswrapper[5012]: I0219 07:04:05.920473 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-7xnxl_86b984ed-bd52-4348-9415-dccff4a0e1a4/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.007272 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-9rlns_f7c29e8e-a085-4dcc-8dbf-7fa1f971a4dc/ssh-known-hosts-edpm-deployment/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.235777 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-59bfbf7475-v98h9_4c9aa274-240d-4d50-b38a-754dd493f351/proxy-server/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.382158 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-59bfbf7475-v98h9_4c9aa274-240d-4d50-b38a-754dd493f351/proxy-httpd/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.425762 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-5vxhd_d05da3bc-6c22-4956-9fab-331eed79d175/swift-ring-rebalance/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.581956 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/account-auditor/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.631961 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/account-reaper/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.775383 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/account-replicator/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.791154 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/account-server/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 
07:04:06.849791 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/container-auditor/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.908998 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/container-replicator/0.log" Feb 19 07:04:06 crc kubenswrapper[5012]: I0219 07:04:06.977559 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/container-server/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.016774 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/container-updater/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.101100 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-auditor/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.171470 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-expirer/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.211929 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-replicator/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.275572 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-server/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.358397 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/object-updater/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.401883 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/rsync/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.483660 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c089afc3-1655-4675-b4e1-a62ec6929498/swift-recon-cron/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.708955 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-2cwrx_73fe066f-3ee6-4ffc-aeb4-874c14fb0b84/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.768622 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_54eccb09-b3ec-45bc-8065-4c5eb9516257/tempest-tests-tempest-tests-runner/0.log" Feb 19 07:04:07 crc kubenswrapper[5012]: I0219 07:04:07.893348 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_78c125a8-bf69-4524-9b70-be9fe9f313e7/test-operator-logs-container/0.log" Feb 19 07:04:08 crc kubenswrapper[5012]: I0219 07:04:08.016602 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6z6p6_cdccd552-e703-4d8d-86b4-ff481671527f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 07:04:08 crc kubenswrapper[5012]: I0219 07:04:08.922139 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_d4778529-f7d0-482b-bd67-003aaa7ca0ae/watcher-applier/0.log" Feb 19 07:04:09 crc kubenswrapper[5012]: I0219 07:04:09.404664 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7d74d5de-7e1d-47cc-8aaa-cb303332a03a/watcher-api-log/0.log" Feb 19 07:04:12 crc kubenswrapper[5012]: I0219 07:04:12.041852 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_watcher-decision-engine-0_f87036fc-fa94-4038-8b65-bb85d8ff6f10/watcher-decision-engine/0.log" Feb 19 07:04:13 crc kubenswrapper[5012]: I0219 07:04:13.307859 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7d74d5de-7e1d-47cc-8aaa-cb303332a03a/watcher-api/0.log" Feb 19 07:04:23 crc kubenswrapper[5012]: I0219 07:04:23.716284 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_38a4a51f-c380-48fc-8f0e-cdd1ea09fa53/memcached/0.log" Feb 19 07:04:41 crc kubenswrapper[5012]: I0219 07:04:41.904782 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/util/0.log" Feb 19 07:04:42 crc kubenswrapper[5012]: I0219 07:04:42.069081 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/util/0.log" Feb 19 07:04:42 crc kubenswrapper[5012]: I0219 07:04:42.113692 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/pull/0.log" Feb 19 07:04:42 crc kubenswrapper[5012]: I0219 07:04:42.126458 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/pull/0.log" Feb 19 07:04:42 crc kubenswrapper[5012]: I0219 07:04:42.309991 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/extract/0.log" Feb 19 07:04:42 crc kubenswrapper[5012]: I0219 07:04:42.332613 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/util/0.log" Feb 19 07:04:42 crc kubenswrapper[5012]: I0219 07:04:42.359634 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f52c407bdc9ecc5c9ed04cde121370cff57ca187d042afc6ea79b7967xjx5q_59bb7d65-7d8f-487c-b586-7cd4be8eab12/pull/0.log" Feb 19 07:04:42 crc kubenswrapper[5012]: I0219 07:04:42.728334 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-kt4nw_11d49fcd-6e31-47e5-84a1-c6ae972e13cb/manager/0.log" Feb 19 07:04:43 crc kubenswrapper[5012]: I0219 07:04:43.059209 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-qzq7x_8b3edb91-d9bc-4f6f-9cf5-5d40f05bf3be/manager/0.log" Feb 19 07:04:43 crc kubenswrapper[5012]: I0219 07:04:43.216924 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-5szxp_bfca307c-9b00-4c12-bdd6-a394b7cc7cfd/manager/0.log" Feb 19 07:04:43 crc kubenswrapper[5012]: I0219 07:04:43.476427 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-csct6_4f281b5b-b656-4d4a-b628-d4bfe4fc94f9/manager/0.log" Feb 19 07:04:44 crc kubenswrapper[5012]: I0219 07:04:44.002967 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-dgldv_8629b5e4-e6a8-4c47-b76b-f58a26b42912/manager/0.log" Feb 19 07:04:44 crc kubenswrapper[5012]: I0219 07:04:44.208994 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-cp8kx_996bfd61-486b-432d-9e09-d3a90ff9124c/manager/0.log" Feb 19 07:04:44 crc kubenswrapper[5012]: I0219 07:04:44.543827 5012 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-9zkvx_dc8b43fc-06e4-4408-84fd-8a9e0fdf2f43/manager/0.log" Feb 19 07:04:44 crc kubenswrapper[5012]: I0219 07:04:44.683345 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-ldrx5_e9e07b56-2724-4046-8a60-81b751fb0588/manager/0.log" Feb 19 07:04:44 crc kubenswrapper[5012]: I0219 07:04:44.857121 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-556xv_8af03a54-ad7a-4684-b5a6-ba83f410e6ed/manager/0.log" Feb 19 07:04:44 crc kubenswrapper[5012]: I0219 07:04:44.888983 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-rpbt8_1e872b11-03d6-4d3f-8e06-e10e1e73d917/manager/0.log" Feb 19 07:04:45 crc kubenswrapper[5012]: I0219 07:04:45.139655 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-27hfc_b123191d-e55b-4ddc-90ea-abcb34c97be2/manager/0.log" Feb 19 07:04:45 crc kubenswrapper[5012]: I0219 07:04:45.325960 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-l65c5_457202a7-ae9f-4d06-8690-d220e532b305/manager/0.log" Feb 19 07:04:45 crc kubenswrapper[5012]: I0219 07:04:45.826115 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-fb5fcc5b8-52rb4_d6eb3922-90e6-4bb1-8caa-aac6b69c76b0/manager/0.log" Feb 19 07:04:46 crc kubenswrapper[5012]: I0219 07:04:46.111141 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6679bf9b57-q57bk_76b34ac4-96f1-4bbc-9969-eb3e1cfc2159/operator/0.log" Feb 19 07:04:46 crc kubenswrapper[5012]: I0219 
07:04:46.495262 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cl447_797c14cf-1b4d-4b4e-9dc5-4843e2e77cef/registry-server/0.log" Feb 19 07:04:47 crc kubenswrapper[5012]: I0219 07:04:47.142121 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-25qtj_10e6fa53-581b-4965-8a38-c70a5c61c6d7/manager/0.log" Feb 19 07:04:47 crc kubenswrapper[5012]: I0219 07:04:47.237026 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-nlqtw_08a4f79c-e42e-4609-b104-01b9a05ac95a/manager/0.log" Feb 19 07:04:47 crc kubenswrapper[5012]: I0219 07:04:47.442564 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-mqc2w_4a3cde05-282a-4c65-9570-74d04c71a034/operator/0.log" Feb 19 07:04:47 crc kubenswrapper[5012]: I0219 07:04:47.669862 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-6hfg4_c55ed223-371b-409a-bcb6-8ca6d2a3c908/manager/0.log" Feb 19 07:04:48 crc kubenswrapper[5012]: I0219 07:04:48.050830 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-69ff7bc449-tj54n_d1f124a8-4132-458d-a5a5-1839d31e7772/manager/0.log" Feb 19 07:04:48 crc kubenswrapper[5012]: I0219 07:04:48.203413 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-qjpw6_49d66f3b-e451-4b73-bc6a-4b854a71a4d6/manager/0.log" Feb 19 07:04:48 crc kubenswrapper[5012]: I0219 07:04:48.250188 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-pcpk8_73e25e30-860d-4faf-b1f3-bc284f7189d1/manager/0.log" Feb 19 07:04:48 crc kubenswrapper[5012]: I0219 
07:04:48.456929 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-pqrs7_ef60eda4-7ead-499b-b70f-07a34574096f/manager/0.log" Feb 19 07:04:48 crc kubenswrapper[5012]: I0219 07:04:48.488273 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-z5r47_739941d0-4bff-4dae-8f01-636386a37dd0/manager/0.log" Feb 19 07:04:53 crc kubenswrapper[5012]: I0219 07:04:53.319990 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-xzk2n_0cc1b41b-fbf6-4d0c-b721-dcad09c03feb/manager/0.log" Feb 19 07:05:10 crc kubenswrapper[5012]: I0219 07:05:10.997222 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mbxqf_9102ddf1-e140-48e7-9ecd-14a4c007f5d5/control-plane-machine-set-operator/0.log" Feb 19 07:05:11 crc kubenswrapper[5012]: I0219 07:05:11.226403 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6qvzq_5c537eae-5a27-4a4d-ba9e-0fd7efe72f37/kube-rbac-proxy/0.log" Feb 19 07:05:11 crc kubenswrapper[5012]: I0219 07:05:11.233806 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6qvzq_5c537eae-5a27-4a4d-ba9e-0fd7efe72f37/machine-api-operator/0.log" Feb 19 07:05:25 crc kubenswrapper[5012]: I0219 07:05:25.355286 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-sq68l_3c776e3c-32bf-4f6d-89b7-75bc3e1d3e02/cert-manager-controller/0.log" Feb 19 07:05:25 crc kubenswrapper[5012]: I0219 07:05:25.532669 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-w66zf_4b5870bd-8fb3-4eef-a893-f31ce8bb1506/cert-manager-cainjector/0.log" Feb 19 07:05:25 crc 
kubenswrapper[5012]: I0219 07:05:25.567192 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-drndq_53138562-0907-4b72-b228-21ef0c561f57/cert-manager-webhook/0.log" Feb 19 07:05:39 crc kubenswrapper[5012]: I0219 07:05:39.830737 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-zvl62_0aad4d6c-fc60-4843-b21b-d4ad6d552d5f/nmstate-console-plugin/0.log" Feb 19 07:05:40 crc kubenswrapper[5012]: I0219 07:05:40.064178 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-tdz8p_4b5e9e17-84bc-4d05-87f9-328826ea39df/nmstate-handler/0.log" Feb 19 07:05:40 crc kubenswrapper[5012]: I0219 07:05:40.184757 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-hn274_91d45b3f-23b3-4342-8168-667f665ffe82/nmstate-metrics/0.log" Feb 19 07:05:40 crc kubenswrapper[5012]: I0219 07:05:40.196481 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-hn274_91d45b3f-23b3-4342-8168-667f665ffe82/kube-rbac-proxy/0.log" Feb 19 07:05:40 crc kubenswrapper[5012]: I0219 07:05:40.340719 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-2smgj_d6ac1260-4ff8-4025-af6e-35711452ef6f/nmstate-operator/0.log" Feb 19 07:05:40 crc kubenswrapper[5012]: I0219 07:05:40.402399 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-mqtfh_50749fb3-e43e-4874-a0ea-8dabae225f85/nmstate-webhook/0.log" Feb 19 07:05:55 crc kubenswrapper[5012]: I0219 07:05:55.751977 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-9t66t_9f3d925a-f08d-4e92-baf3-805f27c9ae35/prometheus-operator/0.log" Feb 19 07:05:55 crc kubenswrapper[5012]: I0219 07:05:55.908570 5012 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-685558f558-cddcp_9364b7f3-e3e3-4432-a4e7-4b80c9a50225/prometheus-operator-admission-webhook/0.log" Feb 19 07:05:55 crc kubenswrapper[5012]: I0219 07:05:55.995411 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-685558f558-rlcjg_3c60bb85-2242-4d9f-95f9-27b2e747727d/prometheus-operator-admission-webhook/0.log" Feb 19 07:05:56 crc kubenswrapper[5012]: I0219 07:05:56.117622 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-vw7xl_63ee166b-5027-4928-9196-9488685f87d5/operator/0.log" Feb 19 07:05:56 crc kubenswrapper[5012]: I0219 07:05:56.153324 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-5grbr_86bcbf15-9553-41af-974c-3418e588e575/perses-operator/0.log" Feb 19 07:06:10 crc kubenswrapper[5012]: I0219 07:06:10.894224 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-c4jbq_fe949ecf-1cb7-47c7-b196-d4851f142c5f/kube-rbac-proxy/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.025364 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-c4jbq_fe949ecf-1cb7-47c7-b196-d4851f142c5f/controller/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.131135 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-frr-files/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.279662 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-frr-files/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.283533 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-reloader/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.308057 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-metrics/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.335754 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-reloader/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.507189 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-frr-files/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.524097 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-metrics/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.542754 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-reloader/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.549868 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-metrics/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.667196 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-frr-files/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.672002 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-reloader/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.720527 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/cp-metrics/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.730977 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/controller/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.861132 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/frr-metrics/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.951694 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/kube-rbac-proxy/0.log" Feb 19 07:06:11 crc kubenswrapper[5012]: I0219 07:06:11.958543 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/kube-rbac-proxy-frr/0.log" Feb 19 07:06:12 crc kubenswrapper[5012]: I0219 07:06:12.040566 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/reloader/0.log" Feb 19 07:06:12 crc kubenswrapper[5012]: I0219 07:06:12.190432 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-hdb84_431a9bf4-479e-4255-9664-554c80fa4376/frr-k8s-webhook-server/0.log" Feb 19 07:06:12 crc kubenswrapper[5012]: I0219 07:06:12.378994 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-558c5c4774-9r4gj_05b78fff-bf4d-4cd6-aba9-b74303a5dd50/manager/0.log" Feb 19 07:06:12 crc kubenswrapper[5012]: I0219 07:06:12.535606 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-699bc447bd-zqv74_ec7fdada-6f6e-4d8b-b2e1-c944050c714c/webhook-server/0.log" Feb 19 07:06:12 crc kubenswrapper[5012]: I0219 07:06:12.719258 5012 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-87ct4_82cb6684-3937-45f8-9f18-56940e88f480/kube-rbac-proxy/0.log" Feb 19 07:06:13 crc kubenswrapper[5012]: I0219 07:06:13.267147 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-87ct4_82cb6684-3937-45f8-9f18-56940e88f480/speaker/0.log" Feb 19 07:06:13 crc kubenswrapper[5012]: I0219 07:06:13.506630 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4d76m_48b2548c-eb36-4c42-a84f-2d3f2084a46f/frr/0.log" Feb 19 07:06:14 crc kubenswrapper[5012]: I0219 07:06:14.430252 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 07:06:14 crc kubenswrapper[5012]: I0219 07:06:14.430321 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 07:06:28 crc kubenswrapper[5012]: I0219 07:06:28.506128 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/util/0.log" Feb 19 07:06:28 crc kubenswrapper[5012]: I0219 07:06:28.735134 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/pull/0.log" Feb 19 07:06:28 crc kubenswrapper[5012]: I0219 07:06:28.747777 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/util/0.log" Feb 19 07:06:28 crc kubenswrapper[5012]: I0219 07:06:28.776152 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/pull/0.log" Feb 19 07:06:28 crc kubenswrapper[5012]: I0219 07:06:28.958520 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/pull/0.log" Feb 19 07:06:28 crc kubenswrapper[5012]: I0219 07:06:28.973914 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/util/0.log" Feb 19 07:06:28 crc kubenswrapper[5012]: I0219 07:06:28.991662 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08csz2x_5efec1ed-3f58-4825-a63a-ceb26c38531e/extract/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.102494 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/util/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.320257 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/util/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.341122 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/pull/0.log" Feb 19 
07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.375800 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/pull/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.563988 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/extract/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.565414 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/pull/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.593932 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213zpwbf_ee5d7005-f5b3-4a68-8ae6-e74db1bd0778/util/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.758355 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-utilities/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.944444 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-content/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.952717 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-content/0.log" Feb 19 07:06:29 crc kubenswrapper[5012]: I0219 07:06:29.966060 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-utilities/0.log" Feb 19 
07:06:30 crc kubenswrapper[5012]: I0219 07:06:30.143367 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-utilities/0.log" Feb 19 07:06:30 crc kubenswrapper[5012]: I0219 07:06:30.146551 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/extract-content/0.log" Feb 19 07:06:30 crc kubenswrapper[5012]: I0219 07:06:30.363512 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-utilities/0.log" Feb 19 07:06:30 crc kubenswrapper[5012]: I0219 07:06:30.664836 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-utilities/0.log" Feb 19 07:06:30 crc kubenswrapper[5012]: I0219 07:06:30.701760 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-content/0.log" Feb 19 07:06:30 crc kubenswrapper[5012]: I0219 07:06:30.731249 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-content/0.log" Feb 19 07:06:30 crc kubenswrapper[5012]: I0219 07:06:30.803782 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-zflwk_b555779a-946d-4ad9-93a6-2b0673f81cfa/registry-server/0.log" Feb 19 07:06:30 crc kubenswrapper[5012]: I0219 07:06:30.845461 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-utilities/0.log" Feb 19 07:06:30 crc kubenswrapper[5012]: I0219 07:06:30.872392 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/extract-content/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.126525 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/util/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.185633 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pxf2x_4d86775d-0772-4adf-9ed9-c7b3016d97e7/registry-server/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.298987 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/pull/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.345019 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/util/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.362361 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/pull/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.486761 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/pull/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.517577 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/extract/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 
07:06:31.520263 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecatqwsj_6865121b-f9c2-439e-a64a-bf7d94f35797/util/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.657864 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jqjls_800f8349-6ef3-44ae-90a0-56c89ca82479/marketplace-operator/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.705545 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-utilities/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.898536 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-content/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.898857 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-utilities/0.log" Feb 19 07:06:31 crc kubenswrapper[5012]: I0219 07:06:31.903759 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-content/0.log" Feb 19 07:06:32 crc kubenswrapper[5012]: I0219 07:06:32.073844 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-content/0.log" Feb 19 07:06:32 crc kubenswrapper[5012]: I0219 07:06:32.100653 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/extract-utilities/0.log" Feb 19 07:06:32 crc kubenswrapper[5012]: I0219 07:06:32.291072 5012 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-m458l_81c19ca5-841c-4d69-b2ca-a7649d14492f/registry-server/0.log" Feb 19 07:06:32 crc kubenswrapper[5012]: I0219 07:06:32.303934 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-utilities/0.log" Feb 19 07:06:32 crc kubenswrapper[5012]: I0219 07:06:32.457500 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-utilities/0.log" Feb 19 07:06:32 crc kubenswrapper[5012]: I0219 07:06:32.495719 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-content/0.log" Feb 19 07:06:32 crc kubenswrapper[5012]: I0219 07:06:32.515755 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-content/0.log" Feb 19 07:06:32 crc kubenswrapper[5012]: I0219 07:06:32.844427 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-content/0.log" Feb 19 07:06:32 crc kubenswrapper[5012]: I0219 07:06:32.863775 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/extract-utilities/0.log" Feb 19 07:06:33 crc kubenswrapper[5012]: I0219 07:06:33.555820 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-cxb7f_cecb9fea-b109-4267-918f-765d774f76de/registry-server/0.log" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.430959 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.431519 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.661232 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bpf8b"] Feb 19 07:06:44 crc kubenswrapper[5012]: E0219 07:06:44.661713 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216e052a-9145-4dba-a625-f9262c5f27cb" containerName="container-00" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.661729 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="216e052a-9145-4dba-a625-f9262c5f27cb" containerName="container-00" Feb 19 07:06:44 crc kubenswrapper[5012]: E0219 07:06:44.661745 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerName="extract-utilities" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.661752 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerName="extract-utilities" Feb 19 07:06:44 crc kubenswrapper[5012]: E0219 07:06:44.661770 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerName="registry-server" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.661776 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerName="registry-server" Feb 19 07:06:44 crc kubenswrapper[5012]: E0219 07:06:44.661798 5012 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerName="extract-content" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.661804 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerName="extract-content" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.661980 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1690cd8-3b2d-461b-810a-4958ef591f15" containerName="registry-server" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.661990 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="216e052a-9145-4dba-a625-f9262c5f27cb" containerName="container-00" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.663241 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.691711 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpf8b"] Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.742574 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl584\" (UniqueName: \"kubernetes.io/projected/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-kube-api-access-nl584\") pod \"redhat-marketplace-bpf8b\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.742672 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-catalog-content\") pod \"redhat-marketplace-bpf8b\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.743007 5012 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-utilities\") pod \"redhat-marketplace-bpf8b\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.845857 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-utilities\") pod \"redhat-marketplace-bpf8b\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.846036 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl584\" (UniqueName: \"kubernetes.io/projected/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-kube-api-access-nl584\") pod \"redhat-marketplace-bpf8b\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.846066 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-catalog-content\") pod \"redhat-marketplace-bpf8b\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.846556 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-utilities\") pod \"redhat-marketplace-bpf8b\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.846580 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-catalog-content\") pod \"redhat-marketplace-bpf8b\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.873799 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl584\" (UniqueName: \"kubernetes.io/projected/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-kube-api-access-nl584\") pod \"redhat-marketplace-bpf8b\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:44 crc kubenswrapper[5012]: I0219 07:06:44.988585 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:45 crc kubenswrapper[5012]: I0219 07:06:45.475511 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpf8b"] Feb 19 07:06:45 crc kubenswrapper[5012]: I0219 07:06:45.791800 5012 generic.go:334] "Generic (PLEG): container finished" podID="3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" containerID="347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577" exitCode=0 Feb 19 07:06:45 crc kubenswrapper[5012]: I0219 07:06:45.791991 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpf8b" event={"ID":"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323","Type":"ContainerDied","Data":"347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577"} Feb 19 07:06:45 crc kubenswrapper[5012]: I0219 07:06:45.792092 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpf8b" event={"ID":"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323","Type":"ContainerStarted","Data":"6067c6764c5a57d594d7f6654dd024f673786b231a6517e6b1c39cbb30d6b688"} Feb 19 07:06:45 crc kubenswrapper[5012]: I0219 07:06:45.793964 5012 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 07:06:46 crc kubenswrapper[5012]: I0219 07:06:46.918916 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-685558f558-rlcjg_3c60bb85-2242-4d9f-95f9-27b2e747727d/prometheus-operator-admission-webhook/0.log" Feb 19 07:06:46 crc kubenswrapper[5012]: I0219 07:06:46.938004 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-685558f558-cddcp_9364b7f3-e3e3-4432-a4e7-4b80c9a50225/prometheus-operator-admission-webhook/0.log" Feb 19 07:06:46 crc kubenswrapper[5012]: I0219 07:06:46.979400 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-9t66t_9f3d925a-f08d-4e92-baf3-805f27c9ae35/prometheus-operator/0.log" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.051391 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-745ds"] Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.053277 5012 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.064823 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-745ds"] Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.093924 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-catalog-content\") pod \"community-operators-745ds\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.093981 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-utilities\") pod \"community-operators-745ds\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.094408 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f88j\" (UniqueName: \"kubernetes.io/projected/db783a8c-66a5-431b-bdb4-672b0e8991f1-kube-api-access-2f88j\") pod \"community-operators-745ds\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.150020 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-5grbr_86bcbf15-9553-41af-974c-3418e588e575/perses-operator/0.log" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.155463 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-vw7xl_63ee166b-5027-4928-9196-9488685f87d5/operator/0.log" Feb 19 07:06:47 crc 
kubenswrapper[5012]: I0219 07:06:47.195848 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f88j\" (UniqueName: \"kubernetes.io/projected/db783a8c-66a5-431b-bdb4-672b0e8991f1-kube-api-access-2f88j\") pod \"community-operators-745ds\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.195952 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-catalog-content\") pod \"community-operators-745ds\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.195983 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-utilities\") pod \"community-operators-745ds\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.196478 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-catalog-content\") pod \"community-operators-745ds\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.196504 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-utilities\") pod \"community-operators-745ds\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.214799 
5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f88j\" (UniqueName: \"kubernetes.io/projected/db783a8c-66a5-431b-bdb4-672b0e8991f1-kube-api-access-2f88j\") pod \"community-operators-745ds\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.577167 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:47 crc kubenswrapper[5012]: I0219 07:06:47.821812 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpf8b" event={"ID":"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323","Type":"ContainerStarted","Data":"2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1"} Feb 19 07:06:48 crc kubenswrapper[5012]: I0219 07:06:48.128244 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-745ds"] Feb 19 07:06:48 crc kubenswrapper[5012]: W0219 07:06:48.158936 5012 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb783a8c_66a5_431b_bdb4_672b0e8991f1.slice/crio-664820afd3683b476014c50f161c941619369e5d67fbd8aec3b3d67bfa28ed97 WatchSource:0}: Error finding container 664820afd3683b476014c50f161c941619369e5d67fbd8aec3b3d67bfa28ed97: Status 404 returned error can't find the container with id 664820afd3683b476014c50f161c941619369e5d67fbd8aec3b3d67bfa28ed97 Feb 19 07:06:48 crc kubenswrapper[5012]: I0219 07:06:48.835935 5012 generic.go:334] "Generic (PLEG): container finished" podID="db783a8c-66a5-431b-bdb4-672b0e8991f1" containerID="d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb" exitCode=0 Feb 19 07:06:48 crc kubenswrapper[5012]: I0219 07:06:48.836005 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745ds" 
event={"ID":"db783a8c-66a5-431b-bdb4-672b0e8991f1","Type":"ContainerDied","Data":"d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb"} Feb 19 07:06:48 crc kubenswrapper[5012]: I0219 07:06:48.836326 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745ds" event={"ID":"db783a8c-66a5-431b-bdb4-672b0e8991f1","Type":"ContainerStarted","Data":"664820afd3683b476014c50f161c941619369e5d67fbd8aec3b3d67bfa28ed97"} Feb 19 07:06:48 crc kubenswrapper[5012]: I0219 07:06:48.840491 5012 generic.go:334] "Generic (PLEG): container finished" podID="3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" containerID="2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1" exitCode=0 Feb 19 07:06:48 crc kubenswrapper[5012]: I0219 07:06:48.840519 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpf8b" event={"ID":"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323","Type":"ContainerDied","Data":"2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1"} Feb 19 07:06:49 crc kubenswrapper[5012]: I0219 07:06:49.859613 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpf8b" event={"ID":"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323","Type":"ContainerStarted","Data":"89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db"} Feb 19 07:06:49 crc kubenswrapper[5012]: I0219 07:06:49.866245 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745ds" event={"ID":"db783a8c-66a5-431b-bdb4-672b0e8991f1","Type":"ContainerStarted","Data":"64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd"} Feb 19 07:06:49 crc kubenswrapper[5012]: I0219 07:06:49.935604 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bpf8b" podStartSLOduration=2.452900948 podStartE2EDuration="5.93558546s" podCreationTimestamp="2026-02-19 07:06:44 +0000 
UTC" firstStartedPulling="2026-02-19 07:06:45.79376172 +0000 UTC m=+6101.827084289" lastFinishedPulling="2026-02-19 07:06:49.276446232 +0000 UTC m=+6105.309768801" observedRunningTime="2026-02-19 07:06:49.892963323 +0000 UTC m=+6105.926285892" watchObservedRunningTime="2026-02-19 07:06:49.93558546 +0000 UTC m=+6105.968908029" Feb 19 07:06:50 crc kubenswrapper[5012]: I0219 07:06:50.877658 5012 generic.go:334] "Generic (PLEG): container finished" podID="db783a8c-66a5-431b-bdb4-672b0e8991f1" containerID="64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd" exitCode=0 Feb 19 07:06:50 crc kubenswrapper[5012]: I0219 07:06:50.877740 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745ds" event={"ID":"db783a8c-66a5-431b-bdb4-672b0e8991f1","Type":"ContainerDied","Data":"64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd"} Feb 19 07:06:51 crc kubenswrapper[5012]: E0219 07:06:51.076917 5012 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.110:58546->38.102.83.110:36123: read tcp 38.102.83.110:58546->38.102.83.110:36123: read: connection reset by peer Feb 19 07:06:51 crc kubenswrapper[5012]: I0219 07:06:51.891291 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745ds" event={"ID":"db783a8c-66a5-431b-bdb4-672b0e8991f1","Type":"ContainerStarted","Data":"64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55"} Feb 19 07:06:51 crc kubenswrapper[5012]: I0219 07:06:51.907147 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-745ds" podStartSLOduration=2.45812469 podStartE2EDuration="4.907132478s" podCreationTimestamp="2026-02-19 07:06:47 +0000 UTC" firstStartedPulling="2026-02-19 07:06:48.838155865 +0000 UTC m=+6104.871478434" lastFinishedPulling="2026-02-19 07:06:51.287163653 +0000 UTC m=+6107.320486222" observedRunningTime="2026-02-19 
07:06:51.904553615 +0000 UTC m=+6107.937876184" watchObservedRunningTime="2026-02-19 07:06:51.907132478 +0000 UTC m=+6107.940455047" Feb 19 07:06:53 crc kubenswrapper[5012]: E0219 07:06:53.682231 5012 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.110:43550->38.102.83.110:36123: write tcp 38.102.83.110:43550->38.102.83.110:36123: write: broken pipe Feb 19 07:06:54 crc kubenswrapper[5012]: I0219 07:06:54.988816 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:54 crc kubenswrapper[5012]: I0219 07:06:54.989140 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:55 crc kubenswrapper[5012]: I0219 07:06:55.065440 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:55 crc kubenswrapper[5012]: I0219 07:06:55.976222 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:56 crc kubenswrapper[5012]: I0219 07:06:56.248866 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpf8b"] Feb 19 07:06:57 crc kubenswrapper[5012]: I0219 07:06:57.578077 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:57 crc kubenswrapper[5012]: I0219 07:06:57.579212 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-745ds" Feb 19 07:06:57 crc kubenswrapper[5012]: I0219 07:06:57.947775 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bpf8b" podUID="3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" containerName="registry-server" 
containerID="cri-o://89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db" gracePeriod=2 Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.474298 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.592505 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl584\" (UniqueName: \"kubernetes.io/projected/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-kube-api-access-nl584\") pod \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.592596 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-catalog-content\") pod \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.592623 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-utilities\") pod \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\" (UID: \"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323\") " Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.593435 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-utilities" (OuterVolumeSpecName: "utilities") pod "3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" (UID: "3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.618937 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" (UID: "3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.619567 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-kube-api-access-nl584" (OuterVolumeSpecName: "kube-api-access-nl584") pod "3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" (UID: "3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323"). InnerVolumeSpecName "kube-api-access-nl584". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.633621 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-745ds" podUID="db783a8c-66a5-431b-bdb4-672b0e8991f1" containerName="registry-server" probeResult="failure" output=< Feb 19 07:06:58 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s Feb 19 07:06:58 crc kubenswrapper[5012]: > Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.694910 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl584\" (UniqueName: \"kubernetes.io/projected/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-kube-api-access-nl584\") on node \"crc\" DevicePath \"\"" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.694938 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.694949 
5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.957317 5012 generic.go:334] "Generic (PLEG): container finished" podID="3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" containerID="89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db" exitCode=0 Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.957411 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpf8b" event={"ID":"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323","Type":"ContainerDied","Data":"89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db"} Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.957484 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpf8b" event={"ID":"3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323","Type":"ContainerDied","Data":"6067c6764c5a57d594d7f6654dd024f673786b231a6517e6b1c39cbb30d6b688"} Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.957502 5012 scope.go:117] "RemoveContainer" containerID="89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.957380 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpf8b" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.979367 5012 scope.go:117] "RemoveContainer" containerID="2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1" Feb 19 07:06:58 crc kubenswrapper[5012]: I0219 07:06:58.996893 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpf8b"] Feb 19 07:06:59 crc kubenswrapper[5012]: I0219 07:06:59.010103 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpf8b"] Feb 19 07:06:59 crc kubenswrapper[5012]: I0219 07:06:59.017051 5012 scope.go:117] "RemoveContainer" containerID="347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577" Feb 19 07:06:59 crc kubenswrapper[5012]: I0219 07:06:59.074716 5012 scope.go:117] "RemoveContainer" containerID="89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db" Feb 19 07:06:59 crc kubenswrapper[5012]: E0219 07:06:59.075424 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db\": container with ID starting with 89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db not found: ID does not exist" containerID="89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db" Feb 19 07:06:59 crc kubenswrapper[5012]: I0219 07:06:59.075474 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db"} err="failed to get container status \"89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db\": rpc error: code = NotFound desc = could not find container \"89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db\": container with ID starting with 89a0114307b2cb5ba0324e5c84a225feed3a6c190300bcd6ae6064a809d7b1db not found: 
ID does not exist" Feb 19 07:06:59 crc kubenswrapper[5012]: I0219 07:06:59.075506 5012 scope.go:117] "RemoveContainer" containerID="2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1" Feb 19 07:06:59 crc kubenswrapper[5012]: E0219 07:06:59.075734 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1\": container with ID starting with 2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1 not found: ID does not exist" containerID="2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1" Feb 19 07:06:59 crc kubenswrapper[5012]: I0219 07:06:59.075759 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1"} err="failed to get container status \"2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1\": rpc error: code = NotFound desc = could not find container \"2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1\": container with ID starting with 2cad0831af9e3fcf892d030ea661cba7685a9f0784a6d007e093ceaaa2487fc1 not found: ID does not exist" Feb 19 07:06:59 crc kubenswrapper[5012]: I0219 07:06:59.075777 5012 scope.go:117] "RemoveContainer" containerID="347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577" Feb 19 07:06:59 crc kubenswrapper[5012]: E0219 07:06:59.075963 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577\": container with ID starting with 347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577 not found: ID does not exist" containerID="347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577" Feb 19 07:06:59 crc kubenswrapper[5012]: I0219 07:06:59.075989 5012 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577"} err="failed to get container status \"347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577\": rpc error: code = NotFound desc = could not find container \"347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577\": container with ID starting with 347d932b2ef054cb98568dbfe5fe4c52e9d4a6b1ce2c5b40e619ca12db289577 not found: ID does not exist" Feb 19 07:07:00 crc kubenswrapper[5012]: I0219 07:07:00.711732 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" path="/var/lib/kubelet/pods/3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323/volumes" Feb 19 07:07:07 crc kubenswrapper[5012]: I0219 07:07:07.659814 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-745ds" Feb 19 07:07:07 crc kubenswrapper[5012]: I0219 07:07:07.731758 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-745ds" Feb 19 07:07:07 crc kubenswrapper[5012]: I0219 07:07:07.903504 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-745ds"] Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.065865 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-745ds" podUID="db783a8c-66a5-431b-bdb4-672b0e8991f1" containerName="registry-server" containerID="cri-o://64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55" gracePeriod=2 Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.671209 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-745ds" Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.728109 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f88j\" (UniqueName: \"kubernetes.io/projected/db783a8c-66a5-431b-bdb4-672b0e8991f1-kube-api-access-2f88j\") pod \"db783a8c-66a5-431b-bdb4-672b0e8991f1\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.728618 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-utilities\") pod \"db783a8c-66a5-431b-bdb4-672b0e8991f1\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.728726 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-catalog-content\") pod \"db783a8c-66a5-431b-bdb4-672b0e8991f1\" (UID: \"db783a8c-66a5-431b-bdb4-672b0e8991f1\") " Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.729653 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-utilities" (OuterVolumeSpecName: "utilities") pod "db783a8c-66a5-431b-bdb4-672b0e8991f1" (UID: "db783a8c-66a5-431b-bdb4-672b0e8991f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.736367 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db783a8c-66a5-431b-bdb4-672b0e8991f1-kube-api-access-2f88j" (OuterVolumeSpecName: "kube-api-access-2f88j") pod "db783a8c-66a5-431b-bdb4-672b0e8991f1" (UID: "db783a8c-66a5-431b-bdb4-672b0e8991f1"). InnerVolumeSpecName "kube-api-access-2f88j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.800247 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db783a8c-66a5-431b-bdb4-672b0e8991f1" (UID: "db783a8c-66a5-431b-bdb4-672b0e8991f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.831906 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.831947 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db783a8c-66a5-431b-bdb4-672b0e8991f1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 07:07:09 crc kubenswrapper[5012]: I0219 07:07:09.831964 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f88j\" (UniqueName: \"kubernetes.io/projected/db783a8c-66a5-431b-bdb4-672b0e8991f1-kube-api-access-2f88j\") on node \"crc\" DevicePath \"\"" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.083952 5012 generic.go:334] "Generic (PLEG): container finished" podID="db783a8c-66a5-431b-bdb4-672b0e8991f1" containerID="64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55" exitCode=0 Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.083998 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-745ds" event={"ID":"db783a8c-66a5-431b-bdb4-672b0e8991f1","Type":"ContainerDied","Data":"64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55"} Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.084033 5012 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-745ds" event={"ID":"db783a8c-66a5-431b-bdb4-672b0e8991f1","Type":"ContainerDied","Data":"664820afd3683b476014c50f161c941619369e5d67fbd8aec3b3d67bfa28ed97"} Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.084054 5012 scope.go:117] "RemoveContainer" containerID="64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.084060 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-745ds" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.113690 5012 scope.go:117] "RemoveContainer" containerID="64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.136525 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-745ds"] Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.159221 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-745ds"] Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.170030 5012 scope.go:117] "RemoveContainer" containerID="d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.220498 5012 scope.go:117] "RemoveContainer" containerID="64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55" Feb 19 07:07:10 crc kubenswrapper[5012]: E0219 07:07:10.221036 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55\": container with ID starting with 64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55 not found: ID does not exist" containerID="64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 
07:07:10.221086 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55"} err="failed to get container status \"64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55\": rpc error: code = NotFound desc = could not find container \"64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55\": container with ID starting with 64ffdaedb84671c954a89f8116b7a0f4462af0e7c2c8f6e8a71691051913cd55 not found: ID does not exist" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.221119 5012 scope.go:117] "RemoveContainer" containerID="64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd" Feb 19 07:07:10 crc kubenswrapper[5012]: E0219 07:07:10.221542 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd\": container with ID starting with 64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd not found: ID does not exist" containerID="64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.221569 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd"} err="failed to get container status \"64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd\": rpc error: code = NotFound desc = could not find container \"64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd\": container with ID starting with 64494c24af816345862e39a44a8bca6a370bcdae5f5f892b47fc4c47871b35dd not found: ID does not exist" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.221586 5012 scope.go:117] "RemoveContainer" containerID="d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb" Feb 19 07:07:10 crc 
kubenswrapper[5012]: E0219 07:07:10.221834 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb\": container with ID starting with d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb not found: ID does not exist" containerID="d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.221855 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb"} err="failed to get container status \"d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb\": rpc error: code = NotFound desc = could not find container \"d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb\": container with ID starting with d35f0a2c37f43c36079189432aee39f98fd40b7349cd1b96f10ed85f39e723bb not found: ID does not exist" Feb 19 07:07:10 crc kubenswrapper[5012]: I0219 07:07:10.720860 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db783a8c-66a5-431b-bdb4-672b0e8991f1" path="/var/lib/kubelet/pods/db783a8c-66a5-431b-bdb4-672b0e8991f1/volumes" Feb 19 07:07:14 crc kubenswrapper[5012]: I0219 07:07:14.431335 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 07:07:14 crc kubenswrapper[5012]: I0219 07:07:14.431945 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 19 07:07:14 crc kubenswrapper[5012]: I0219 07:07:14.432012 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 07:07:14 crc kubenswrapper[5012]: I0219 07:07:14.433046 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"af85ae40f975af2e29f1da72c10ee6d4757cf3bb8cc11b605a9e59a2b37a565b"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 07:07:14 crc kubenswrapper[5012]: I0219 07:07:14.433142 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://af85ae40f975af2e29f1da72c10ee6d4757cf3bb8cc11b605a9e59a2b37a565b" gracePeriod=600 Feb 19 07:07:15 crc kubenswrapper[5012]: I0219 07:07:15.153253 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="af85ae40f975af2e29f1da72c10ee6d4757cf3bb8cc11b605a9e59a2b37a565b" exitCode=0 Feb 19 07:07:15 crc kubenswrapper[5012]: I0219 07:07:15.153480 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"af85ae40f975af2e29f1da72c10ee6d4757cf3bb8cc11b605a9e59a2b37a565b"} Feb 19 07:07:15 crc kubenswrapper[5012]: I0219 07:07:15.153722 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerStarted","Data":"ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"} Feb 19 07:07:15 crc kubenswrapper[5012]: I0219 07:07:15.153750 5012 scope.go:117] "RemoveContainer" containerID="f2b3372a9e96e43cf3c6975c241c5877d6af50bb8ed6ffb360409b27fe5eec35" Feb 19 07:08:42 crc kubenswrapper[5012]: I0219 07:08:42.297674 5012 generic.go:334] "Generic (PLEG): container finished" podID="91bc1236-3737-44f8-a82a-35044bd3258b" containerID="746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251" exitCode=0 Feb 19 07:08:42 crc kubenswrapper[5012]: I0219 07:08:42.297779 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8nbsb/must-gather-znn9c" event={"ID":"91bc1236-3737-44f8-a82a-35044bd3258b","Type":"ContainerDied","Data":"746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251"} Feb 19 07:08:42 crc kubenswrapper[5012]: I0219 07:08:42.299693 5012 scope.go:117] "RemoveContainer" containerID="746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251" Feb 19 07:08:42 crc kubenswrapper[5012]: I0219 07:08:42.827434 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8nbsb_must-gather-znn9c_91bc1236-3737-44f8-a82a-35044bd3258b/gather/0.log" Feb 19 07:08:43 crc kubenswrapper[5012]: I0219 07:08:43.728700 5012 scope.go:117] "RemoveContainer" containerID="8925602fab983c962e968ddbebc86a948cd3945bd659b4613398d3aca81b02b2" Feb 19 07:08:54 crc kubenswrapper[5012]: I0219 07:08:54.418276 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8nbsb/must-gather-znn9c"] Feb 19 07:08:54 crc kubenswrapper[5012]: I0219 07:08:54.419261 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-8nbsb/must-gather-znn9c" podUID="91bc1236-3737-44f8-a82a-35044bd3258b" containerName="copy" 
containerID="cri-o://e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d" gracePeriod=2 Feb 19 07:08:54 crc kubenswrapper[5012]: I0219 07:08:54.431082 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8nbsb/must-gather-znn9c"] Feb 19 07:08:54 crc kubenswrapper[5012]: I0219 07:08:54.877713 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8nbsb_must-gather-znn9c_91bc1236-3737-44f8-a82a-35044bd3258b/copy/0.log" Feb 19 07:08:54 crc kubenswrapper[5012]: I0219 07:08:54.878597 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:08:54 crc kubenswrapper[5012]: I0219 07:08:54.966170 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh8mp\" (UniqueName: \"kubernetes.io/projected/91bc1236-3737-44f8-a82a-35044bd3258b-kube-api-access-fh8mp\") pod \"91bc1236-3737-44f8-a82a-35044bd3258b\" (UID: \"91bc1236-3737-44f8-a82a-35044bd3258b\") " Feb 19 07:08:54 crc kubenswrapper[5012]: I0219 07:08:54.966563 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/91bc1236-3737-44f8-a82a-35044bd3258b-must-gather-output\") pod \"91bc1236-3737-44f8-a82a-35044bd3258b\" (UID: \"91bc1236-3737-44f8-a82a-35044bd3258b\") " Feb 19 07:08:54 crc kubenswrapper[5012]: I0219 07:08:54.972099 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91bc1236-3737-44f8-a82a-35044bd3258b-kube-api-access-fh8mp" (OuterVolumeSpecName: "kube-api-access-fh8mp") pod "91bc1236-3737-44f8-a82a-35044bd3258b" (UID: "91bc1236-3737-44f8-a82a-35044bd3258b"). InnerVolumeSpecName "kube-api-access-fh8mp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.068717 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh8mp\" (UniqueName: \"kubernetes.io/projected/91bc1236-3737-44f8-a82a-35044bd3258b-kube-api-access-fh8mp\") on node \"crc\" DevicePath \"\"" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.180483 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91bc1236-3737-44f8-a82a-35044bd3258b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "91bc1236-3737-44f8-a82a-35044bd3258b" (UID: "91bc1236-3737-44f8-a82a-35044bd3258b"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.272530 5012 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/91bc1236-3737-44f8-a82a-35044bd3258b-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.445707 5012 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8nbsb_must-gather-znn9c_91bc1236-3737-44f8-a82a-35044bd3258b/copy/0.log" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.446154 5012 generic.go:334] "Generic (PLEG): container finished" podID="91bc1236-3737-44f8-a82a-35044bd3258b" containerID="e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d" exitCode=143 Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.446223 5012 scope.go:117] "RemoveContainer" containerID="e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.446243 5012 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8nbsb/must-gather-znn9c" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.469514 5012 scope.go:117] "RemoveContainer" containerID="746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.592714 5012 scope.go:117] "RemoveContainer" containerID="e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d" Feb 19 07:08:55 crc kubenswrapper[5012]: E0219 07:08:55.593210 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d\": container with ID starting with e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d not found: ID does not exist" containerID="e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.593259 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d"} err="failed to get container status \"e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d\": rpc error: code = NotFound desc = could not find container \"e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d\": container with ID starting with e9e4646a6c49e467de2ceafcf11fa4389eb03d5dea2fa7316d61696772fb304d not found: ID does not exist" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.593284 5012 scope.go:117] "RemoveContainer" containerID="746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251" Feb 19 07:08:55 crc kubenswrapper[5012]: E0219 07:08:55.593839 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251\": container with ID starting with 
746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251 not found: ID does not exist" containerID="746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251" Feb 19 07:08:55 crc kubenswrapper[5012]: I0219 07:08:55.593897 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251"} err="failed to get container status \"746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251\": rpc error: code = NotFound desc = could not find container \"746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251\": container with ID starting with 746b491a9b6c6b580e88640b99a103c5690180e3aac2fa05b604c7a52e7d3251 not found: ID does not exist" Feb 19 07:08:56 crc kubenswrapper[5012]: I0219 07:08:56.722860 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91bc1236-3737-44f8-a82a-35044bd3258b" path="/var/lib/kubelet/pods/91bc1236-3737-44f8-a82a-35044bd3258b/volumes" Feb 19 07:09:14 crc kubenswrapper[5012]: I0219 07:09:14.431260 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 07:09:14 crc kubenswrapper[5012]: I0219 07:09:14.431961 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 07:09:43 crc kubenswrapper[5012]: I0219 07:09:43.797550 5012 scope.go:117] "RemoveContainer" containerID="783537a7e84f3b0ed638f3eb6a2789d1dd27811c0584c5d95f222e682776f22b" Feb 19 07:09:44 crc kubenswrapper[5012]: I0219 
07:09:44.431732 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 07:09:44 crc kubenswrapper[5012]: I0219 07:09:44.431827 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 07:10:14 crc kubenswrapper[5012]: I0219 07:10:14.431033 5012 patch_prober.go:28] interesting pod/machine-config-daemon-5lt44 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 07:10:14 crc kubenswrapper[5012]: I0219 07:10:14.431592 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 07:10:14 crc kubenswrapper[5012]: I0219 07:10:14.431641 5012 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" Feb 19 07:10:14 crc kubenswrapper[5012]: I0219 07:10:14.432447 5012 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"} pod="openshift-machine-config-operator/machine-config-daemon-5lt44" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 07:10:14 crc kubenswrapper[5012]: I0219 07:10:14.432522 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerName="machine-config-daemon" containerID="cri-o://ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3" gracePeriod=600 Feb 19 07:10:14 crc kubenswrapper[5012]: E0219 07:10:14.562420 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:10:15 crc kubenswrapper[5012]: I0219 07:10:15.366936 5012 generic.go:334] "Generic (PLEG): container finished" podID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3" exitCode=0 Feb 19 07:10:15 crc kubenswrapper[5012]: I0219 07:10:15.367012 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" event={"ID":"f72c12f8-ba8a-4e43-aba7-f3c31a59181a","Type":"ContainerDied","Data":"ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"} Feb 19 07:10:15 crc kubenswrapper[5012]: I0219 07:10:15.367088 5012 scope.go:117] "RemoveContainer" containerID="af85ae40f975af2e29f1da72c10ee6d4757cf3bb8cc11b605a9e59a2b37a565b" Feb 19 07:10:15 crc kubenswrapper[5012]: I0219 07:10:15.368108 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3" Feb 19 07:10:15 crc kubenswrapper[5012]: E0219 07:10:15.368923 5012 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:10:29 crc kubenswrapper[5012]: I0219 07:10:29.702102 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3" Feb 19 07:10:29 crc kubenswrapper[5012]: E0219 07:10:29.702788 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:10:44 crc kubenswrapper[5012]: I0219 07:10:44.709051 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3" Feb 19 07:10:44 crc kubenswrapper[5012]: E0219 07:10:44.710127 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:10:57 crc kubenswrapper[5012]: I0219 07:10:57.704387 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3" Feb 19 07:10:57 crc kubenswrapper[5012]: E0219 
07:10:57.705856 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:11:10 crc kubenswrapper[5012]: I0219 07:11:10.703432 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3" Feb 19 07:11:10 crc kubenswrapper[5012]: E0219 07:11:10.704799 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:11:24 crc kubenswrapper[5012]: I0219 07:11:24.711978 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3" Feb 19 07:11:24 crc kubenswrapper[5012]: E0219 07:11:24.713183 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:11:35 crc kubenswrapper[5012]: I0219 07:11:35.703502 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3" Feb 19 07:11:35 crc 
kubenswrapper[5012]: E0219 07:11:35.704419 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:11:48 crc kubenswrapper[5012]: I0219 07:11:48.703142 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3" Feb 19 07:11:48 crc kubenswrapper[5012]: E0219 07:11:48.704271 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:12:01 crc kubenswrapper[5012]: I0219 07:12:01.703072 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3" Feb 19 07:12:01 crc kubenswrapper[5012]: E0219 07:12:01.703847 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:12:16 crc kubenswrapper[5012]: I0219 07:12:16.703900 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3" Feb 
19 07:12:16 crc kubenswrapper[5012]: E0219 07:12:16.705056 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:12:30 crc kubenswrapper[5012]: I0219 07:12:30.702821 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3" Feb 19 07:12:30 crc kubenswrapper[5012]: E0219 07:12:30.703825 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a" Feb 19 07:12:36 crc kubenswrapper[5012]: I0219 07:12:36.562775 5012 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-59bfbf7475-v98h9" podUID="4c9aa274-240d-4d50-b38a-754dd493f351" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 19 07:12:43 crc kubenswrapper[5012]: I0219 07:12:43.703843 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3" Feb 19 07:12:43 crc kubenswrapper[5012]: E0219 07:12:43.704684 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.427473 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d6z6w"]
Feb 19 07:12:57 crc kubenswrapper[5012]: E0219 07:12:57.428614 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db783a8c-66a5-431b-bdb4-672b0e8991f1" containerName="extract-utilities"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.428636 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="db783a8c-66a5-431b-bdb4-672b0e8991f1" containerName="extract-utilities"
Feb 19 07:12:57 crc kubenswrapper[5012]: E0219 07:12:57.428653 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db783a8c-66a5-431b-bdb4-672b0e8991f1" containerName="extract-content"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.428663 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="db783a8c-66a5-431b-bdb4-672b0e8991f1" containerName="extract-content"
Feb 19 07:12:57 crc kubenswrapper[5012]: E0219 07:12:57.428673 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91bc1236-3737-44f8-a82a-35044bd3258b" containerName="gather"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.428684 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="91bc1236-3737-44f8-a82a-35044bd3258b" containerName="gather"
Feb 19 07:12:57 crc kubenswrapper[5012]: E0219 07:12:57.428707 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db783a8c-66a5-431b-bdb4-672b0e8991f1" containerName="registry-server"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.428716 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="db783a8c-66a5-431b-bdb4-672b0e8991f1" containerName="registry-server"
Feb 19 07:12:57 crc kubenswrapper[5012]: E0219 07:12:57.428745 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91bc1236-3737-44f8-a82a-35044bd3258b" containerName="copy"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.428754 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="91bc1236-3737-44f8-a82a-35044bd3258b" containerName="copy"
Feb 19 07:12:57 crc kubenswrapper[5012]: E0219 07:12:57.428783 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" containerName="registry-server"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.428793 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" containerName="registry-server"
Feb 19 07:12:57 crc kubenswrapper[5012]: E0219 07:12:57.428810 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" containerName="extract-utilities"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.428819 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" containerName="extract-utilities"
Feb 19 07:12:57 crc kubenswrapper[5012]: E0219 07:12:57.428844 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" containerName="extract-content"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.428854 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" containerName="extract-content"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.429155 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="91bc1236-3737-44f8-a82a-35044bd3258b" containerName="gather"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.429191 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="db783a8c-66a5-431b-bdb4-672b0e8991f1" containerName="registry-server"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.429213 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb2e867-1b7b-4ebe-9b8c-a8d0b4dc7323" containerName="registry-server"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.429226 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="91bc1236-3737-44f8-a82a-35044bd3258b" containerName="copy"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.432127 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d6z6w"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.439949 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2kjb\" (UniqueName: \"kubernetes.io/projected/0e918142-5969-452d-aeda-630ddbdd8a8f-kube-api-access-v2kjb\") pod \"certified-operators-d6z6w\" (UID: \"0e918142-5969-452d-aeda-630ddbdd8a8f\") " pod="openshift-marketplace/certified-operators-d6z6w"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.440228 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e918142-5969-452d-aeda-630ddbdd8a8f-utilities\") pod \"certified-operators-d6z6w\" (UID: \"0e918142-5969-452d-aeda-630ddbdd8a8f\") " pod="openshift-marketplace/certified-operators-d6z6w"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.440865 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e918142-5969-452d-aeda-630ddbdd8a8f-catalog-content\") pod \"certified-operators-d6z6w\" (UID: \"0e918142-5969-452d-aeda-630ddbdd8a8f\") " pod="openshift-marketplace/certified-operators-d6z6w"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.445649 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d6z6w"]
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.542222 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e918142-5969-452d-aeda-630ddbdd8a8f-utilities\") pod \"certified-operators-d6z6w\" (UID: \"0e918142-5969-452d-aeda-630ddbdd8a8f\") " pod="openshift-marketplace/certified-operators-d6z6w"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.542679 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e918142-5969-452d-aeda-630ddbdd8a8f-catalog-content\") pod \"certified-operators-d6z6w\" (UID: \"0e918142-5969-452d-aeda-630ddbdd8a8f\") " pod="openshift-marketplace/certified-operators-d6z6w"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.542714 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2kjb\" (UniqueName: \"kubernetes.io/projected/0e918142-5969-452d-aeda-630ddbdd8a8f-kube-api-access-v2kjb\") pod \"certified-operators-d6z6w\" (UID: \"0e918142-5969-452d-aeda-630ddbdd8a8f\") " pod="openshift-marketplace/certified-operators-d6z6w"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.543351 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e918142-5969-452d-aeda-630ddbdd8a8f-catalog-content\") pod \"certified-operators-d6z6w\" (UID: \"0e918142-5969-452d-aeda-630ddbdd8a8f\") " pod="openshift-marketplace/certified-operators-d6z6w"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.543504 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e918142-5969-452d-aeda-630ddbdd8a8f-utilities\") pod \"certified-operators-d6z6w\" (UID: \"0e918142-5969-452d-aeda-630ddbdd8a8f\") " pod="openshift-marketplace/certified-operators-d6z6w"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.569863 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2kjb\" (UniqueName: \"kubernetes.io/projected/0e918142-5969-452d-aeda-630ddbdd8a8f-kube-api-access-v2kjb\") pod \"certified-operators-d6z6w\" (UID: \"0e918142-5969-452d-aeda-630ddbdd8a8f\") " pod="openshift-marketplace/certified-operators-d6z6w"
Feb 19 07:12:57 crc kubenswrapper[5012]: I0219 07:12:57.765737 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d6z6w"
Feb 19 07:12:58 crc kubenswrapper[5012]: I0219 07:12:58.297889 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d6z6w"]
Feb 19 07:12:58 crc kubenswrapper[5012]: I0219 07:12:58.703911 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"
Feb 19 07:12:58 crc kubenswrapper[5012]: E0219 07:12:58.704765 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:12:58 crc kubenswrapper[5012]: I0219 07:12:58.982375 5012 generic.go:334] "Generic (PLEG): container finished" podID="0e918142-5969-452d-aeda-630ddbdd8a8f" containerID="91902f892faf17a372a888c1e63226d3c292b262a305a66367b535a3ba0188ed" exitCode=0
Feb 19 07:12:58 crc kubenswrapper[5012]: I0219 07:12:58.982437 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6z6w" event={"ID":"0e918142-5969-452d-aeda-630ddbdd8a8f","Type":"ContainerDied","Data":"91902f892faf17a372a888c1e63226d3c292b262a305a66367b535a3ba0188ed"}
Feb 19 07:12:58 crc kubenswrapper[5012]: I0219 07:12:58.982476 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6z6w" event={"ID":"0e918142-5969-452d-aeda-630ddbdd8a8f","Type":"ContainerStarted","Data":"d35fd1c00e168954759546237960c168ed15b44a9e8f68514e4d4a8134c38a49"}
Feb 19 07:12:58 crc kubenswrapper[5012]: I0219 07:12:58.987292 5012 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 07:13:00 crc kubenswrapper[5012]: I0219 07:13:00.022000 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6z6w" event={"ID":"0e918142-5969-452d-aeda-630ddbdd8a8f","Type":"ContainerStarted","Data":"aa29ecedcb2a3f0a748322fb0cb13446f815065e2664f474881766c2e305b35e"}
Feb 19 07:13:01 crc kubenswrapper[5012]: I0219 07:13:01.039569 5012 generic.go:334] "Generic (PLEG): container finished" podID="0e918142-5969-452d-aeda-630ddbdd8a8f" containerID="aa29ecedcb2a3f0a748322fb0cb13446f815065e2664f474881766c2e305b35e" exitCode=0
Feb 19 07:13:01 crc kubenswrapper[5012]: I0219 07:13:01.039717 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6z6w" event={"ID":"0e918142-5969-452d-aeda-630ddbdd8a8f","Type":"ContainerDied","Data":"aa29ecedcb2a3f0a748322fb0cb13446f815065e2664f474881766c2e305b35e"}
Feb 19 07:13:02 crc kubenswrapper[5012]: I0219 07:13:02.068364 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6z6w" event={"ID":"0e918142-5969-452d-aeda-630ddbdd8a8f","Type":"ContainerStarted","Data":"d3e6b1b1b49a139b8a1873b767eb773a60b7c395fd17da59a2c81caacf64d2b0"}
Feb 19 07:13:02 crc kubenswrapper[5012]: I0219 07:13:02.096833 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d6z6w" podStartSLOduration=2.639073943 podStartE2EDuration="5.096794609s" podCreationTimestamp="2026-02-19 07:12:57 +0000 UTC" firstStartedPulling="2026-02-19 07:12:58.986748979 +0000 UTC m=+6475.020071588" lastFinishedPulling="2026-02-19 07:13:01.444469655 +0000 UTC m=+6477.477792254" observedRunningTime="2026-02-19 07:13:02.09069952 +0000 UTC m=+6478.124022099" watchObservedRunningTime="2026-02-19 07:13:02.096794609 +0000 UTC m=+6478.130117198"
Feb 19 07:13:07 crc kubenswrapper[5012]: I0219 07:13:07.765928 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d6z6w"
Feb 19 07:13:07 crc kubenswrapper[5012]: I0219 07:13:07.766685 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d6z6w"
Feb 19 07:13:07 crc kubenswrapper[5012]: I0219 07:13:07.856618 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d6z6w"
Feb 19 07:13:08 crc kubenswrapper[5012]: I0219 07:13:08.208094 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d6z6w"
Feb 19 07:13:08 crc kubenswrapper[5012]: I0219 07:13:08.278407 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d6z6w"]
Feb 19 07:13:10 crc kubenswrapper[5012]: I0219 07:13:10.155932 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d6z6w" podUID="0e918142-5969-452d-aeda-630ddbdd8a8f" containerName="registry-server" containerID="cri-o://d3e6b1b1b49a139b8a1873b767eb773a60b7c395fd17da59a2c81caacf64d2b0" gracePeriod=2
Feb 19 07:13:10 crc kubenswrapper[5012]: I0219 07:13:10.715376 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d6z6w"
Feb 19 07:13:10 crc kubenswrapper[5012]: I0219 07:13:10.821913 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e918142-5969-452d-aeda-630ddbdd8a8f-utilities\") pod \"0e918142-5969-452d-aeda-630ddbdd8a8f\" (UID: \"0e918142-5969-452d-aeda-630ddbdd8a8f\") "
Feb 19 07:13:10 crc kubenswrapper[5012]: I0219 07:13:10.822161 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e918142-5969-452d-aeda-630ddbdd8a8f-catalog-content\") pod \"0e918142-5969-452d-aeda-630ddbdd8a8f\" (UID: \"0e918142-5969-452d-aeda-630ddbdd8a8f\") "
Feb 19 07:13:10 crc kubenswrapper[5012]: I0219 07:13:10.822218 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2kjb\" (UniqueName: \"kubernetes.io/projected/0e918142-5969-452d-aeda-630ddbdd8a8f-kube-api-access-v2kjb\") pod \"0e918142-5969-452d-aeda-630ddbdd8a8f\" (UID: \"0e918142-5969-452d-aeda-630ddbdd8a8f\") "
Feb 19 07:13:10 crc kubenswrapper[5012]: I0219 07:13:10.823697 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e918142-5969-452d-aeda-630ddbdd8a8f-utilities" (OuterVolumeSpecName: "utilities") pod "0e918142-5969-452d-aeda-630ddbdd8a8f" (UID: "0e918142-5969-452d-aeda-630ddbdd8a8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 07:13:10 crc kubenswrapper[5012]: I0219 07:13:10.834631 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e918142-5969-452d-aeda-630ddbdd8a8f-kube-api-access-v2kjb" (OuterVolumeSpecName: "kube-api-access-v2kjb") pod "0e918142-5969-452d-aeda-630ddbdd8a8f" (UID: "0e918142-5969-452d-aeda-630ddbdd8a8f"). InnerVolumeSpecName "kube-api-access-v2kjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 07:13:10 crc kubenswrapper[5012]: I0219 07:13:10.890508 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e918142-5969-452d-aeda-630ddbdd8a8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e918142-5969-452d-aeda-630ddbdd8a8f" (UID: "0e918142-5969-452d-aeda-630ddbdd8a8f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 07:13:10 crc kubenswrapper[5012]: I0219 07:13:10.924724 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e918142-5969-452d-aeda-630ddbdd8a8f-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 07:13:10 crc kubenswrapper[5012]: I0219 07:13:10.924756 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e918142-5969-452d-aeda-630ddbdd8a8f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 07:13:10 crc kubenswrapper[5012]: I0219 07:13:10.924768 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2kjb\" (UniqueName: \"kubernetes.io/projected/0e918142-5969-452d-aeda-630ddbdd8a8f-kube-api-access-v2kjb\") on node \"crc\" DevicePath \"\""
Feb 19 07:13:11 crc kubenswrapper[5012]: I0219 07:13:11.170806 5012 generic.go:334] "Generic (PLEG): container finished" podID="0e918142-5969-452d-aeda-630ddbdd8a8f" containerID="d3e6b1b1b49a139b8a1873b767eb773a60b7c395fd17da59a2c81caacf64d2b0" exitCode=0
Feb 19 07:13:11 crc kubenswrapper[5012]: I0219 07:13:11.171015 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6z6w" event={"ID":"0e918142-5969-452d-aeda-630ddbdd8a8f","Type":"ContainerDied","Data":"d3e6b1b1b49a139b8a1873b767eb773a60b7c395fd17da59a2c81caacf64d2b0"}
Feb 19 07:13:11 crc kubenswrapper[5012]: I0219 07:13:11.171239 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d6z6w" event={"ID":"0e918142-5969-452d-aeda-630ddbdd8a8f","Type":"ContainerDied","Data":"d35fd1c00e168954759546237960c168ed15b44a9e8f68514e4d4a8134c38a49"}
Feb 19 07:13:11 crc kubenswrapper[5012]: I0219 07:13:11.171134 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d6z6w"
Feb 19 07:13:11 crc kubenswrapper[5012]: I0219 07:13:11.171271 5012 scope.go:117] "RemoveContainer" containerID="d3e6b1b1b49a139b8a1873b767eb773a60b7c395fd17da59a2c81caacf64d2b0"
Feb 19 07:13:11 crc kubenswrapper[5012]: I0219 07:13:11.192218 5012 scope.go:117] "RemoveContainer" containerID="aa29ecedcb2a3f0a748322fb0cb13446f815065e2664f474881766c2e305b35e"
Feb 19 07:13:11 crc kubenswrapper[5012]: I0219 07:13:11.220729 5012 scope.go:117] "RemoveContainer" containerID="91902f892faf17a372a888c1e63226d3c292b262a305a66367b535a3ba0188ed"
Feb 19 07:13:11 crc kubenswrapper[5012]: I0219 07:13:11.234965 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d6z6w"]
Feb 19 07:13:11 crc kubenswrapper[5012]: I0219 07:13:11.246040 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d6z6w"]
Feb 19 07:13:11 crc kubenswrapper[5012]: I0219 07:13:11.273595 5012 scope.go:117] "RemoveContainer" containerID="d3e6b1b1b49a139b8a1873b767eb773a60b7c395fd17da59a2c81caacf64d2b0"
Feb 19 07:13:11 crc kubenswrapper[5012]: E0219 07:13:11.274172 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3e6b1b1b49a139b8a1873b767eb773a60b7c395fd17da59a2c81caacf64d2b0\": container with ID starting with d3e6b1b1b49a139b8a1873b767eb773a60b7c395fd17da59a2c81caacf64d2b0 not found: ID does not exist" containerID="d3e6b1b1b49a139b8a1873b767eb773a60b7c395fd17da59a2c81caacf64d2b0"
Feb 19 07:13:11 crc kubenswrapper[5012]: I0219 07:13:11.274202 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3e6b1b1b49a139b8a1873b767eb773a60b7c395fd17da59a2c81caacf64d2b0"} err="failed to get container status \"d3e6b1b1b49a139b8a1873b767eb773a60b7c395fd17da59a2c81caacf64d2b0\": rpc error: code = NotFound desc = could not find container \"d3e6b1b1b49a139b8a1873b767eb773a60b7c395fd17da59a2c81caacf64d2b0\": container with ID starting with d3e6b1b1b49a139b8a1873b767eb773a60b7c395fd17da59a2c81caacf64d2b0 not found: ID does not exist"
Feb 19 07:13:11 crc kubenswrapper[5012]: I0219 07:13:11.274222 5012 scope.go:117] "RemoveContainer" containerID="aa29ecedcb2a3f0a748322fb0cb13446f815065e2664f474881766c2e305b35e"
Feb 19 07:13:11 crc kubenswrapper[5012]: E0219 07:13:11.274542 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa29ecedcb2a3f0a748322fb0cb13446f815065e2664f474881766c2e305b35e\": container with ID starting with aa29ecedcb2a3f0a748322fb0cb13446f815065e2664f474881766c2e305b35e not found: ID does not exist" containerID="aa29ecedcb2a3f0a748322fb0cb13446f815065e2664f474881766c2e305b35e"
Feb 19 07:13:11 crc kubenswrapper[5012]: I0219 07:13:11.274584 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa29ecedcb2a3f0a748322fb0cb13446f815065e2664f474881766c2e305b35e"} err="failed to get container status \"aa29ecedcb2a3f0a748322fb0cb13446f815065e2664f474881766c2e305b35e\": rpc error: code = NotFound desc = could not find container \"aa29ecedcb2a3f0a748322fb0cb13446f815065e2664f474881766c2e305b35e\": container with ID starting with aa29ecedcb2a3f0a748322fb0cb13446f815065e2664f474881766c2e305b35e not found: ID does not exist"
Feb 19 07:13:11 crc kubenswrapper[5012]: I0219 07:13:11.274610 5012 scope.go:117] "RemoveContainer" containerID="91902f892faf17a372a888c1e63226d3c292b262a305a66367b535a3ba0188ed"
Feb 19 07:13:11 crc kubenswrapper[5012]: E0219 07:13:11.274966 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91902f892faf17a372a888c1e63226d3c292b262a305a66367b535a3ba0188ed\": container with ID starting with 91902f892faf17a372a888c1e63226d3c292b262a305a66367b535a3ba0188ed not found: ID does not exist" containerID="91902f892faf17a372a888c1e63226d3c292b262a305a66367b535a3ba0188ed"
Feb 19 07:13:11 crc kubenswrapper[5012]: I0219 07:13:11.274992 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91902f892faf17a372a888c1e63226d3c292b262a305a66367b535a3ba0188ed"} err="failed to get container status \"91902f892faf17a372a888c1e63226d3c292b262a305a66367b535a3ba0188ed\": rpc error: code = NotFound desc = could not find container \"91902f892faf17a372a888c1e63226d3c292b262a305a66367b535a3ba0188ed\": container with ID starting with 91902f892faf17a372a888c1e63226d3c292b262a305a66367b535a3ba0188ed not found: ID does not exist"
Feb 19 07:13:12 crc kubenswrapper[5012]: I0219 07:13:12.731840 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e918142-5969-452d-aeda-630ddbdd8a8f" path="/var/lib/kubelet/pods/0e918142-5969-452d-aeda-630ddbdd8a8f/volumes"
Feb 19 07:13:13 crc kubenswrapper[5012]: I0219 07:13:13.703285 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"
Feb 19 07:13:13 crc kubenswrapper[5012]: E0219 07:13:13.704041 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:13:21 crc kubenswrapper[5012]: I0219 07:13:21.539661 5012 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6hwl5"]
Feb 19 07:13:21 crc kubenswrapper[5012]: E0219 07:13:21.541288 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e918142-5969-452d-aeda-630ddbdd8a8f" containerName="extract-content"
Feb 19 07:13:21 crc kubenswrapper[5012]: I0219 07:13:21.541357 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e918142-5969-452d-aeda-630ddbdd8a8f" containerName="extract-content"
Feb 19 07:13:21 crc kubenswrapper[5012]: E0219 07:13:21.541423 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e918142-5969-452d-aeda-630ddbdd8a8f" containerName="extract-utilities"
Feb 19 07:13:21 crc kubenswrapper[5012]: I0219 07:13:21.541443 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e918142-5969-452d-aeda-630ddbdd8a8f" containerName="extract-utilities"
Feb 19 07:13:21 crc kubenswrapper[5012]: E0219 07:13:21.541510 5012 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e918142-5969-452d-aeda-630ddbdd8a8f" containerName="registry-server"
Feb 19 07:13:21 crc kubenswrapper[5012]: I0219 07:13:21.541529 5012 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e918142-5969-452d-aeda-630ddbdd8a8f" containerName="registry-server"
Feb 19 07:13:21 crc kubenswrapper[5012]: I0219 07:13:21.542062 5012 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e918142-5969-452d-aeda-630ddbdd8a8f" containerName="registry-server"
Feb 19 07:13:21 crc kubenswrapper[5012]: I0219 07:13:21.545206 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6hwl5"
Feb 19 07:13:21 crc kubenswrapper[5012]: I0219 07:13:21.552033 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6hwl5"]
Feb 19 07:13:21 crc kubenswrapper[5012]: I0219 07:13:21.564189 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5jh2\" (UniqueName: \"kubernetes.io/projected/6c0f0a0c-d332-496c-87ca-a881c4ea6275-kube-api-access-h5jh2\") pod \"redhat-operators-6hwl5\" (UID: \"6c0f0a0c-d332-496c-87ca-a881c4ea6275\") " pod="openshift-marketplace/redhat-operators-6hwl5"
Feb 19 07:13:21 crc kubenswrapper[5012]: I0219 07:13:21.564358 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c0f0a0c-d332-496c-87ca-a881c4ea6275-catalog-content\") pod \"redhat-operators-6hwl5\" (UID: \"6c0f0a0c-d332-496c-87ca-a881c4ea6275\") " pod="openshift-marketplace/redhat-operators-6hwl5"
Feb 19 07:13:21 crc kubenswrapper[5012]: I0219 07:13:21.564400 5012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c0f0a0c-d332-496c-87ca-a881c4ea6275-utilities\") pod \"redhat-operators-6hwl5\" (UID: \"6c0f0a0c-d332-496c-87ca-a881c4ea6275\") " pod="openshift-marketplace/redhat-operators-6hwl5"
Feb 19 07:13:21 crc kubenswrapper[5012]: I0219 07:13:21.666569 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5jh2\" (UniqueName: \"kubernetes.io/projected/6c0f0a0c-d332-496c-87ca-a881c4ea6275-kube-api-access-h5jh2\") pod \"redhat-operators-6hwl5\" (UID: \"6c0f0a0c-d332-496c-87ca-a881c4ea6275\") " pod="openshift-marketplace/redhat-operators-6hwl5"
Feb 19 07:13:21 crc kubenswrapper[5012]: I0219 07:13:21.666777 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c0f0a0c-d332-496c-87ca-a881c4ea6275-catalog-content\") pod \"redhat-operators-6hwl5\" (UID: \"6c0f0a0c-d332-496c-87ca-a881c4ea6275\") " pod="openshift-marketplace/redhat-operators-6hwl5"
Feb 19 07:13:21 crc kubenswrapper[5012]: I0219 07:13:21.666833 5012 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c0f0a0c-d332-496c-87ca-a881c4ea6275-utilities\") pod \"redhat-operators-6hwl5\" (UID: \"6c0f0a0c-d332-496c-87ca-a881c4ea6275\") " pod="openshift-marketplace/redhat-operators-6hwl5"
Feb 19 07:13:21 crc kubenswrapper[5012]: I0219 07:13:21.667418 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c0f0a0c-d332-496c-87ca-a881c4ea6275-catalog-content\") pod \"redhat-operators-6hwl5\" (UID: \"6c0f0a0c-d332-496c-87ca-a881c4ea6275\") " pod="openshift-marketplace/redhat-operators-6hwl5"
Feb 19 07:13:21 crc kubenswrapper[5012]: I0219 07:13:21.667483 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c0f0a0c-d332-496c-87ca-a881c4ea6275-utilities\") pod \"redhat-operators-6hwl5\" (UID: \"6c0f0a0c-d332-496c-87ca-a881c4ea6275\") " pod="openshift-marketplace/redhat-operators-6hwl5"
Feb 19 07:13:21 crc kubenswrapper[5012]: I0219 07:13:21.692422 5012 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5jh2\" (UniqueName: \"kubernetes.io/projected/6c0f0a0c-d332-496c-87ca-a881c4ea6275-kube-api-access-h5jh2\") pod \"redhat-operators-6hwl5\" (UID: \"6c0f0a0c-d332-496c-87ca-a881c4ea6275\") " pod="openshift-marketplace/redhat-operators-6hwl5"
Feb 19 07:13:21 crc kubenswrapper[5012]: I0219 07:13:21.867796 5012 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6hwl5"
Feb 19 07:13:22 crc kubenswrapper[5012]: I0219 07:13:22.367883 5012 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6hwl5"]
Feb 19 07:13:23 crc kubenswrapper[5012]: I0219 07:13:23.340837 5012 generic.go:334] "Generic (PLEG): container finished" podID="6c0f0a0c-d332-496c-87ca-a881c4ea6275" containerID="238fbb0143236e5c3d7572740097795a037f1bf9bc0c8f23a0de9fbb002c66d7" exitCode=0
Feb 19 07:13:23 crc kubenswrapper[5012]: I0219 07:13:23.340933 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hwl5" event={"ID":"6c0f0a0c-d332-496c-87ca-a881c4ea6275","Type":"ContainerDied","Data":"238fbb0143236e5c3d7572740097795a037f1bf9bc0c8f23a0de9fbb002c66d7"}
Feb 19 07:13:23 crc kubenswrapper[5012]: I0219 07:13:23.341371 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hwl5" event={"ID":"6c0f0a0c-d332-496c-87ca-a881c4ea6275","Type":"ContainerStarted","Data":"81bb167e3b9953773257a22f147720048be1c5622f6994fd1be3291c797e0333"}
Feb 19 07:13:24 crc kubenswrapper[5012]: I0219 07:13:24.352947 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hwl5" event={"ID":"6c0f0a0c-d332-496c-87ca-a881c4ea6275","Type":"ContainerStarted","Data":"601c8d6370b2b78875dc2308f56dbbf1b6722a624687db04d6f32dde79aab703"}
Feb 19 07:13:26 crc kubenswrapper[5012]: I0219 07:13:26.703917 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"
Feb 19 07:13:26 crc kubenswrapper[5012]: E0219 07:13:26.705773 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:13:28 crc kubenswrapper[5012]: I0219 07:13:28.392855 5012 generic.go:334] "Generic (PLEG): container finished" podID="6c0f0a0c-d332-496c-87ca-a881c4ea6275" containerID="601c8d6370b2b78875dc2308f56dbbf1b6722a624687db04d6f32dde79aab703" exitCode=0
Feb 19 07:13:28 crc kubenswrapper[5012]: I0219 07:13:28.392945 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hwl5" event={"ID":"6c0f0a0c-d332-496c-87ca-a881c4ea6275","Type":"ContainerDied","Data":"601c8d6370b2b78875dc2308f56dbbf1b6722a624687db04d6f32dde79aab703"}
Feb 19 07:13:29 crc kubenswrapper[5012]: I0219 07:13:29.405049 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hwl5" event={"ID":"6c0f0a0c-d332-496c-87ca-a881c4ea6275","Type":"ContainerStarted","Data":"c71015a239b48c283932084ff3820fdbf69bbf1386717c88914d3671244d96d5"}
Feb 19 07:13:29 crc kubenswrapper[5012]: I0219 07:13:29.436796 5012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6hwl5" podStartSLOduration=2.972765217 podStartE2EDuration="8.436773223s" podCreationTimestamp="2026-02-19 07:13:21 +0000 UTC" firstStartedPulling="2026-02-19 07:13:23.342981464 +0000 UTC m=+6499.376304063" lastFinishedPulling="2026-02-19 07:13:28.80698946 +0000 UTC m=+6504.840312069" observedRunningTime="2026-02-19 07:13:29.432498958 +0000 UTC m=+6505.465821567" watchObservedRunningTime="2026-02-19 07:13:29.436773223 +0000 UTC m=+6505.470095792"
Feb 19 07:13:31 crc kubenswrapper[5012]: I0219 07:13:31.868824 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6hwl5"
Feb 19 07:13:31 crc kubenswrapper[5012]: I0219 07:13:31.869193 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6hwl5"
Feb 19 07:13:32 crc kubenswrapper[5012]: I0219 07:13:32.931792 5012 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6hwl5" podUID="6c0f0a0c-d332-496c-87ca-a881c4ea6275" containerName="registry-server" probeResult="failure" output=<
Feb 19 07:13:32 crc kubenswrapper[5012]: timeout: failed to connect service ":50051" within 1s
Feb 19 07:13:32 crc kubenswrapper[5012]: >
Feb 19 07:13:41 crc kubenswrapper[5012]: I0219 07:13:41.703891 5012 scope.go:117] "RemoveContainer" containerID="ac5f4bdfce1c6e24be02ecba6fe91ba6be7260813a4a32189a9502fc9a9ec7f3"
Feb 19 07:13:41 crc kubenswrapper[5012]: E0219 07:13:41.704972 5012 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5lt44_openshift-machine-config-operator(f72c12f8-ba8a-4e43-aba7-f3c31a59181a)\"" pod="openshift-machine-config-operator/machine-config-daemon-5lt44" podUID="f72c12f8-ba8a-4e43-aba7-f3c31a59181a"
Feb 19 07:13:41 crc kubenswrapper[5012]: I0219 07:13:41.953949 5012 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6hwl5"
Feb 19 07:13:42 crc kubenswrapper[5012]: I0219 07:13:42.025750 5012 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6hwl5"
Feb 19 07:13:42 crc kubenswrapper[5012]: I0219 07:13:42.218697 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6hwl5"]
Feb 19 07:13:43 crc kubenswrapper[5012]: I0219 07:13:43.570689 5012 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6hwl5" podUID="6c0f0a0c-d332-496c-87ca-a881c4ea6275" containerName="registry-server" containerID="cri-o://c71015a239b48c283932084ff3820fdbf69bbf1386717c88914d3671244d96d5" gracePeriod=2
Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.164844 5012 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6hwl5"
Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.263028 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5jh2\" (UniqueName: \"kubernetes.io/projected/6c0f0a0c-d332-496c-87ca-a881c4ea6275-kube-api-access-h5jh2\") pod \"6c0f0a0c-d332-496c-87ca-a881c4ea6275\" (UID: \"6c0f0a0c-d332-496c-87ca-a881c4ea6275\") "
Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.263147 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c0f0a0c-d332-496c-87ca-a881c4ea6275-utilities\") pod \"6c0f0a0c-d332-496c-87ca-a881c4ea6275\" (UID: \"6c0f0a0c-d332-496c-87ca-a881c4ea6275\") "
Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.263356 5012 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c0f0a0c-d332-496c-87ca-a881c4ea6275-catalog-content\") pod \"6c0f0a0c-d332-496c-87ca-a881c4ea6275\" (UID: \"6c0f0a0c-d332-496c-87ca-a881c4ea6275\") "
Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.264690 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c0f0a0c-d332-496c-87ca-a881c4ea6275-utilities" (OuterVolumeSpecName: "utilities") pod "6c0f0a0c-d332-496c-87ca-a881c4ea6275" (UID: "6c0f0a0c-d332-496c-87ca-a881c4ea6275"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.274509 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c0f0a0c-d332-496c-87ca-a881c4ea6275-kube-api-access-h5jh2" (OuterVolumeSpecName: "kube-api-access-h5jh2") pod "6c0f0a0c-d332-496c-87ca-a881c4ea6275" (UID: "6c0f0a0c-d332-496c-87ca-a881c4ea6275"). InnerVolumeSpecName "kube-api-access-h5jh2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.365760 5012 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c0f0a0c-d332-496c-87ca-a881c4ea6275-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.365808 5012 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5jh2\" (UniqueName: \"kubernetes.io/projected/6c0f0a0c-d332-496c-87ca-a881c4ea6275-kube-api-access-h5jh2\") on node \"crc\" DevicePath \"\""
Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.415895 5012 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c0f0a0c-d332-496c-87ca-a881c4ea6275-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c0f0a0c-d332-496c-87ca-a881c4ea6275" (UID: "6c0f0a0c-d332-496c-87ca-a881c4ea6275"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.467208 5012 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c0f0a0c-d332-496c-87ca-a881c4ea6275-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.581659 5012 generic.go:334] "Generic (PLEG): container finished" podID="6c0f0a0c-d332-496c-87ca-a881c4ea6275" containerID="c71015a239b48c283932084ff3820fdbf69bbf1386717c88914d3671244d96d5" exitCode=0
Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.581719 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hwl5" event={"ID":"6c0f0a0c-d332-496c-87ca-a881c4ea6275","Type":"ContainerDied","Data":"c71015a239b48c283932084ff3820fdbf69bbf1386717c88914d3671244d96d5"}
Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.581733 5012 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-6hwl5" Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.581745 5012 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hwl5" event={"ID":"6c0f0a0c-d332-496c-87ca-a881c4ea6275","Type":"ContainerDied","Data":"81bb167e3b9953773257a22f147720048be1c5622f6994fd1be3291c797e0333"} Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.581763 5012 scope.go:117] "RemoveContainer" containerID="c71015a239b48c283932084ff3820fdbf69bbf1386717c88914d3671244d96d5" Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.603369 5012 scope.go:117] "RemoveContainer" containerID="601c8d6370b2b78875dc2308f56dbbf1b6722a624687db04d6f32dde79aab703" Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.617537 5012 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6hwl5"] Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.634630 5012 scope.go:117] "RemoveContainer" containerID="238fbb0143236e5c3d7572740097795a037f1bf9bc0c8f23a0de9fbb002c66d7" Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.641835 5012 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6hwl5"] Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.680658 5012 scope.go:117] "RemoveContainer" containerID="c71015a239b48c283932084ff3820fdbf69bbf1386717c88914d3671244d96d5" Feb 19 07:13:44 crc kubenswrapper[5012]: E0219 07:13:44.681071 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c71015a239b48c283932084ff3820fdbf69bbf1386717c88914d3671244d96d5\": container with ID starting with c71015a239b48c283932084ff3820fdbf69bbf1386717c88914d3671244d96d5 not found: ID does not exist" containerID="c71015a239b48c283932084ff3820fdbf69bbf1386717c88914d3671244d96d5" Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.681121 5012 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c71015a239b48c283932084ff3820fdbf69bbf1386717c88914d3671244d96d5"} err="failed to get container status \"c71015a239b48c283932084ff3820fdbf69bbf1386717c88914d3671244d96d5\": rpc error: code = NotFound desc = could not find container \"c71015a239b48c283932084ff3820fdbf69bbf1386717c88914d3671244d96d5\": container with ID starting with c71015a239b48c283932084ff3820fdbf69bbf1386717c88914d3671244d96d5 not found: ID does not exist" Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.681146 5012 scope.go:117] "RemoveContainer" containerID="601c8d6370b2b78875dc2308f56dbbf1b6722a624687db04d6f32dde79aab703" Feb 19 07:13:44 crc kubenswrapper[5012]: E0219 07:13:44.681504 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"601c8d6370b2b78875dc2308f56dbbf1b6722a624687db04d6f32dde79aab703\": container with ID starting with 601c8d6370b2b78875dc2308f56dbbf1b6722a624687db04d6f32dde79aab703 not found: ID does not exist" containerID="601c8d6370b2b78875dc2308f56dbbf1b6722a624687db04d6f32dde79aab703" Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.681536 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"601c8d6370b2b78875dc2308f56dbbf1b6722a624687db04d6f32dde79aab703"} err="failed to get container status \"601c8d6370b2b78875dc2308f56dbbf1b6722a624687db04d6f32dde79aab703\": rpc error: code = NotFound desc = could not find container \"601c8d6370b2b78875dc2308f56dbbf1b6722a624687db04d6f32dde79aab703\": container with ID starting with 601c8d6370b2b78875dc2308f56dbbf1b6722a624687db04d6f32dde79aab703 not found: ID does not exist" Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.681557 5012 scope.go:117] "RemoveContainer" containerID="238fbb0143236e5c3d7572740097795a037f1bf9bc0c8f23a0de9fbb002c66d7" Feb 19 07:13:44 crc kubenswrapper[5012]: E0219 
07:13:44.683158 5012 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"238fbb0143236e5c3d7572740097795a037f1bf9bc0c8f23a0de9fbb002c66d7\": container with ID starting with 238fbb0143236e5c3d7572740097795a037f1bf9bc0c8f23a0de9fbb002c66d7 not found: ID does not exist" containerID="238fbb0143236e5c3d7572740097795a037f1bf9bc0c8f23a0de9fbb002c66d7"
Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.683184 5012 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"238fbb0143236e5c3d7572740097795a037f1bf9bc0c8f23a0de9fbb002c66d7"} err="failed to get container status \"238fbb0143236e5c3d7572740097795a037f1bf9bc0c8f23a0de9fbb002c66d7\": rpc error: code = NotFound desc = could not find container \"238fbb0143236e5c3d7572740097795a037f1bf9bc0c8f23a0de9fbb002c66d7\": container with ID starting with 238fbb0143236e5c3d7572740097795a037f1bf9bc0c8f23a0de9fbb002c66d7 not found: ID does not exist"
Feb 19 07:13:44 crc kubenswrapper[5012]: I0219 07:13:44.714721 5012 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c0f0a0c-d332-496c-87ca-a881c4ea6275" path="/var/lib/kubelet/pods/6c0f0a0c-d332-496c-87ca-a881c4ea6275/volumes"